00:00:00.001 Started by upstream project "autotest-nightly" build number 4284 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3647 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.167 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.170 The recommended git tool is: git 00:00:00.170 using credential 00000000-0000-0000-0000-000000000002 00:00:00.173 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.220 Fetching changes from the remote Git repository 00:00:00.221 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.260 Using shallow fetch with depth 1 00:00:00.260 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.260 > git --version # timeout=10 00:00:00.294 > git --version # 'git version 2.39.2' 00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.319 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.544 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.554 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.564 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.564 > git config core.sparsecheckout # timeout=10 00:00:10.576 > git read-tree -mu HEAD # timeout=10 00:00:10.591 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.611 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.611 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.704 [Pipeline] Start of Pipeline 00:00:10.720 [Pipeline] library 00:00:10.722 Loading library shm_lib@master 00:00:10.722 Library shm_lib@master is cached. Copying from home. 00:00:10.740 [Pipeline] node 00:00:10.758 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:10.761 [Pipeline] { 00:00:10.774 [Pipeline] catchError 00:00:10.776 [Pipeline] { 00:00:10.790 [Pipeline] wrap 00:00:10.801 [Pipeline] { 00:00:10.810 [Pipeline] stage 00:00:10.812 [Pipeline] { (Prologue) 00:00:10.834 [Pipeline] echo 00:00:10.835 Node: VM-host-SM38 00:00:10.842 [Pipeline] cleanWs 00:00:10.856 [WS-CLEANUP] Deleting project workspace... 00:00:10.856 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.867 [WS-CLEANUP] done 00:00:11.070 [Pipeline] setCustomBuildProperty 00:00:11.164 [Pipeline] httpRequest 00:00:11.549 [Pipeline] echo 00:00:11.550 Sorcerer 10.211.164.20 is alive 00:00:11.558 [Pipeline] retry 00:00:11.560 [Pipeline] { 00:00:11.572 [Pipeline] httpRequest 00:00:11.577 HttpMethod: GET 00:00:11.578 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.579 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.580 Response Code: HTTP/1.1 200 OK 00:00:11.581 Success: Status code 200 is in the accepted range: 200,404 00:00:11.582 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.446 [Pipeline] } 00:00:12.467 [Pipeline] // retry 00:00:12.474 [Pipeline] sh 00:00:12.761 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.778 [Pipeline] httpRequest 00:00:13.143 [Pipeline] echo 00:00:13.145 Sorcerer 10.211.164.20 is alive 00:00:13.154 [Pipeline] retry 00:00:13.156 [Pipeline] { 00:00:13.171 [Pipeline] httpRequest 00:00:13.177 HttpMethod: GET 00:00:13.177 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:13.178 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:13.192 Response Code: HTTP/1.1 200 OK 00:00:13.192 Success: Status code 200 is in the accepted range: 200,404 00:00:13.193 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:14.109 [Pipeline] } 00:01:14.126 [Pipeline] // retry 00:01:14.134 [Pipeline] sh 00:01:14.421 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:17.737 [Pipeline] sh 00:01:18.022 + git -C spdk log --oneline -n5 00:01:18.022 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:01:18.022 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:01:18.022 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:18.022 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:01:18.022 029355612 bdev_ut: add manual examine bdev unit test case 00:01:18.043 [Pipeline] writeFile 00:01:18.057 [Pipeline] sh 00:01:18.341 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:18.356 [Pipeline] sh 00:01:18.641 + cat autorun-spdk.conf 00:01:18.641 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.641 SPDK_TEST_NVME=1 00:01:18.641 SPDK_TEST_FTL=1 00:01:18.641 SPDK_TEST_ISAL=1 00:01:18.641 SPDK_RUN_ASAN=1 00:01:18.641 SPDK_RUN_UBSAN=1 00:01:18.641 SPDK_TEST_XNVME=1 00:01:18.641 SPDK_TEST_NVME_FDP=1 00:01:18.641 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.650 RUN_NIGHTLY=1 00:01:18.652 [Pipeline] } 00:01:18.667 [Pipeline] // stage 00:01:18.684 [Pipeline] stage 00:01:18.686 [Pipeline] { (Run VM) 00:01:18.700 [Pipeline] sh 00:01:18.986 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:18.986 + echo 'Start stage prepare_nvme.sh' 00:01:18.986 Start stage prepare_nvme.sh 00:01:18.986 + [[ -n 10 ]] 00:01:18.986 + disk_prefix=ex10 00:01:18.986 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:18.986 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:18.986 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:18.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.986 ++ SPDK_TEST_NVME=1 00:01:18.986 ++ SPDK_TEST_FTL=1 00:01:18.986 ++ SPDK_TEST_ISAL=1 00:01:18.986 ++ SPDK_RUN_ASAN=1 00:01:18.986 ++ SPDK_RUN_UBSAN=1 00:01:18.986 ++ SPDK_TEST_XNVME=1 00:01:18.986 ++ SPDK_TEST_NVME_FDP=1 00:01:18.986 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.986 ++ RUN_NIGHTLY=1 00:01:18.986 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:18.986 + nvme_files=() 00:01:18.986 + declare -A nvme_files 00:01:18.986 + backend_dir=/var/lib/libvirt/images/backends 00:01:18.986 + nvme_files['nvme.img']=5G 00:01:18.986 + nvme_files['nvme-cmb.img']=5G 00:01:18.986 + nvme_files['nvme-multi0.img']=4G 00:01:18.986 + nvme_files['nvme-multi1.img']=4G 00:01:18.986 + nvme_files['nvme-multi2.img']=4G 00:01:18.986 + nvme_files['nvme-openstack.img']=8G 00:01:18.986 + nvme_files['nvme-zns.img']=5G 00:01:18.986 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:18.986 + (( SPDK_TEST_FTL == 1 )) 00:01:18.986 + nvme_files["nvme-ftl.img"]=6G 00:01:18.986 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:18.986 + nvme_files["nvme-fdp.img"]=1G 00:01:18.986 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:18.986 + for nvme in "${!nvme_files[@]}" 00:01:18.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:18.986 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.986 + for nvme in "${!nvme_files[@]}" 00:01:18.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:19.248 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:19.248 + for nvme in "${!nvme_files[@]}" 00:01:19.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:19.248 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.248 + for nvme in "${!nvme_files[@]}" 00:01:19.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:19.248 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:19.248 + for nvme in "${!nvme_files[@]}" 00:01:19.248 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:19.248 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.248 + for nvme in "${!nvme_files[@]}" 00:01:19.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:19.509 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.509 + for nvme in "${!nvme_files[@]}" 00:01:19.509 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:19.509 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.509 + for nvme in "${!nvme_files[@]}" 00:01:19.509 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:19.509 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:19.509 + for nvme in "${!nvme_files[@]}" 00:01:19.509 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:19.770 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.770 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:19.770 + echo 'End stage prepare_nvme.sh' 00:01:19.770 End stage prepare_nvme.sh 00:01:19.783 [Pipeline] sh 00:01:20.068 + DISTRO=fedora39 00:01:20.068 + CPUS=10 00:01:20.068 + RAM=12288 00:01:20.068 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:20.068 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:20.068 
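Editor's note: the raw backing files formatted above look like qemu-img output (the "Formatting '...', fmt=raw size=... preallocation=falloc" messages are what qemu-img create prints); a minimal sketch of creating one such file by hand, assuming create_nvme_img.sh is a thin wrapper around qemu-img and that the same backend directory is used:

  # Hypothetical stand-alone equivalent for two of the images above (paths and sizes taken from this log).
  sudo qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex10-nvme.img 5G
  # The FDP backing file is only 1G and is attached to its own NVMe subsystem later in the run.
  sudo qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex10-nvme-fdp.img 1G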
00:01:20.068 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:20.068 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:20.068 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:20.068 HELP=0 00:01:20.068 DRY_RUN=0 00:01:20.068 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:20.068 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:20.068 NVME_AUTO_CREATE=0 00:01:20.068 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:20.068 NVME_CMB=,,,, 00:01:20.068 NVME_PMR=,,,, 00:01:20.068 NVME_ZNS=,,,, 00:01:20.068 NVME_MS=true,,,, 00:01:20.068 NVME_FDP=,,,on, 00:01:20.068 SPDK_VAGRANT_DISTRO=fedora39 00:01:20.068 SPDK_VAGRANT_VMCPU=10 00:01:20.068 SPDK_VAGRANT_VMRAM=12288 00:01:20.068 SPDK_VAGRANT_PROVIDER=libvirt 00:01:20.068 SPDK_VAGRANT_HTTP_PROXY= 00:01:20.068 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:20.068 SPDK_OPENSTACK_NETWORK=0 00:01:20.068 VAGRANT_PACKAGE_BOX=0 00:01:20.068 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:20.068 FORCE_DISTRO=true 00:01:20.068 VAGRANT_BOX_VERSION= 00:01:20.068 EXTRA_VAGRANTFILES= 00:01:20.068 NIC_MODEL=e1000 00:01:20.068 00:01:20.068 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:20.068 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:22.621 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.925 ==> default: Creating image (snapshot of base box volume). 00:01:23.210 ==> default: Creating domain with the following settings... 
00:01:23.210 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732045856_979e54120b882009a68f 00:01:23.210 ==> default: -- Domain type: kvm 00:01:23.210 ==> default: -- Cpus: 10 00:01:23.210 ==> default: -- Feature: acpi 00:01:23.210 ==> default: -- Feature: apic 00:01:23.210 ==> default: -- Feature: pae 00:01:23.210 ==> default: -- Memory: 12288M 00:01:23.210 ==> default: -- Memory Backing: hugepages: 00:01:23.210 ==> default: -- Management MAC: 00:01:23.210 ==> default: -- Loader: 00:01:23.210 ==> default: -- Nvram: 00:01:23.210 ==> default: -- Base box: spdk/fedora39 00:01:23.210 ==> default: -- Storage pool: default 00:01:23.210 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732045856_979e54120b882009a68f.img (20G) 00:01:23.210 ==> default: -- Volume Cache: default 00:01:23.210 ==> default: -- Kernel: 00:01:23.210 ==> default: -- Initrd: 00:01:23.210 ==> default: -- Graphics Type: vnc 00:01:23.210 ==> default: -- Graphics Port: -1 00:01:23.210 ==> default: -- Graphics IP: 127.0.0.1 00:01:23.210 ==> default: -- Graphics Password: Not defined 00:01:23.210 ==> default: -- Video Type: cirrus 00:01:23.210 ==> default: -- Video VRAM: 9216 00:01:23.210 ==> default: -- Sound Type: 00:01:23.210 ==> default: -- Keymap: en-us 00:01:23.210 ==> default: -- TPM Path: 00:01:23.210 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:23.210 ==> default: -- Command line args: 00:01:23.210 ==> default: -> value=-device, 00:01:23.210 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:23.210 ==> default: -> value=-drive, 00:01:23.210 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:23.210 ==> default: -> value=-device, 00:01:23.210 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:23.210 ==> default: -> value=-device, 00:01:23.210 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:23.210 ==> default: -> value=-drive, 00:01:23.210 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:23.210 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:23.211 ==> default: -> value=-drive, 00:01:23.211 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.211 ==> default: -> value=-drive, 00:01:23.211 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.211 ==> default: -> value=-drive, 00:01:23.211 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:23.211 ==> default: -> value=-drive, 00:01:23.211 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:23.211 ==> default: -> value=-device, 00:01:23.211 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.211 ==> default: Creating shared folders metadata... 00:01:23.211 ==> default: Starting domain. 00:01:25.129 ==> default: Waiting for domain to get an IP address... 00:01:43.291 ==> default: Waiting for SSH to become available... 00:01:43.291 ==> default: Configuring and enabling network interfaces... 00:01:47.494 default: SSH address: 192.168.121.66:22 00:01:47.494 default: SSH username: vagrant 00:01:47.494 default: SSH auth method: private key 00:01:48.878 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.021 ==> default: Mounting SSHFS shared folder... 00:01:59.573 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.573 ==> default: Checking Mount.. 00:02:00.518 ==> default: Folder Successfully Mounted! 00:02:00.518 00:02:00.518 SUCCESS! 00:02:00.518 00:02:00.518 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.518 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.518 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.518 00:02:00.529 [Pipeline] } 00:02:00.545 [Pipeline] // stage 00:02:00.555 [Pipeline] dir 00:02:00.556 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:00.558 [Pipeline] { 00:02:00.572 [Pipeline] catchError 00:02:00.574 [Pipeline] { 00:02:00.588 [Pipeline] sh 00:02:00.878 + vagrant ssh-config --host vagrant 00:02:00.878 + tee ssh_conf 00:02:00.878 + sed -ne '/^Host/,$p' 00:02:03.420 Host vagrant 00:02:03.420 HostName 192.168.121.66 00:02:03.420 User vagrant 00:02:03.420 Port 22 00:02:03.420 UserKnownHostsFile /dev/null 00:02:03.420 StrictHostKeyChecking no 00:02:03.420 PasswordAuthentication no 00:02:03.420 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.420 IdentitiesOnly yes 00:02:03.420 LogLevel FATAL 00:02:03.420 ForwardAgent yes 00:02:03.420 ForwardX11 yes 00:02:03.420 00:02:03.434 [Pipeline] withEnv 00:02:03.436 [Pipeline] { 00:02:03.451 [Pipeline] sh 00:02:03.729 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:03.729 source /etc/os-release 00:02:03.729 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.729 # Minimal, systemd-like check. 
00:02:03.729 if [[ -e /.dockerenv ]]; then 00:02:03.729 # Clear garbage from the node'\''s name: 00:02:03.729 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.729 # $HOSTNAME is the actual container id 00:02:03.729 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.729 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.729 # We can assume this is a mount from a host where container is running, 00:02:03.729 # so fetch its hostname to easily identify the target swarm worker. 00:02:03.729 container="$(< /etc/hostname) ($agent)" 00:02:03.729 else 00:02:03.729 # Fallback 00:02:03.729 container=$agent 00:02:03.729 fi 00:02:03.729 fi 00:02:03.729 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.729 ' 00:02:03.739 [Pipeline] } 00:02:03.759 [Pipeline] // withEnv 00:02:03.767 [Pipeline] setCustomBuildProperty 00:02:03.781 [Pipeline] stage 00:02:03.783 [Pipeline] { (Tests) 00:02:03.800 [Pipeline] sh 00:02:04.078 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.090 [Pipeline] sh 00:02:04.367 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.382 [Pipeline] timeout 00:02:04.382 Timeout set to expire in 50 min 00:02:04.384 [Pipeline] { 00:02:04.398 [Pipeline] sh 00:02:04.677 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:04.936 HEAD is now at f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:04.948 [Pipeline] sh 00:02:05.226 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:05.497 [Pipeline] sh 00:02:05.777 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:05.794 [Pipeline] sh 00:02:06.073 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:06.073 ++ readlink -f spdk_repo 00:02:06.073 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.073 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.073 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.073 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.073 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.073 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.073 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.073 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:06.073 + cd /home/vagrant/spdk_repo 00:02:06.073 + source /etc/os-release 00:02:06.073 ++ NAME='Fedora Linux' 00:02:06.073 ++ VERSION='39 (Cloud Edition)' 00:02:06.073 ++ ID=fedora 00:02:06.073 ++ VERSION_ID=39 00:02:06.073 ++ VERSION_CODENAME= 00:02:06.073 ++ PLATFORM_ID=platform:f39 00:02:06.073 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.073 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.073 ++ LOGO=fedora-logo-icon 00:02:06.073 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.073 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.073 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.073 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.073 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.073 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.073 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.073 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.073 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.073 ++ SUPPORT_END=2024-11-12 00:02:06.073 ++ VARIANT='Cloud Edition' 00:02:06.073 ++ VARIANT_ID=cloud 00:02:06.073 + uname -a 00:02:06.073 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.073 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:06.591 Hugepages 00:02:06.591 node hugesize free / total 00:02:06.591 node0 1048576kB 0 / 0 00:02:06.591 node0 2048kB 0 / 0 00:02:06.591 00:02:06.591 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.850 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.850 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.850 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:06.850 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:06.850 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:06.850 + rm -f /tmp/spdk-ld-path 00:02:06.850 + source autorun-spdk.conf 00:02:06.850 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.850 ++ SPDK_TEST_NVME=1 00:02:06.850 ++ SPDK_TEST_FTL=1 00:02:06.850 ++ SPDK_TEST_ISAL=1 00:02:06.850 ++ SPDK_RUN_ASAN=1 00:02:06.850 ++ SPDK_RUN_UBSAN=1 00:02:06.850 ++ SPDK_TEST_XNVME=1 00:02:06.850 ++ SPDK_TEST_NVME_FDP=1 00:02:06.850 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.850 ++ RUN_NIGHTLY=1 00:02:06.850 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.850 + [[ -n '' ]] 00:02:06.850 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.850 + for M in /var/spdk/build-*-manifest.txt 00:02:06.850 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:06.850 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.850 + for M in /var/spdk/build-*-manifest.txt 00:02:06.851 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.851 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.851 + for M in /var/spdk/build-*-manifest.txt 00:02:06.851 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.851 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.851 ++ uname 00:02:06.851 + [[ Linux == \L\i\n\u\x ]] 00:02:06.851 + sudo dmesg -T 00:02:06.851 + sudo dmesg --clear 00:02:06.851 + dmesg_pid=5037 00:02:06.851 
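Editor's note: the setup.sh status table above lists four QEMU-emulated NVMe controllers (PCI ID 1b36:0010) matching the -device nvme entries from the domain definition earlier in the log; a small sketch for cross-checking them from inside the guest, assuming stock lspci and the kernel's nvme sysfs attributes:

  # Hypothetical spot-check; serials 12340..12343 were assigned in the QEMU command-line args above.
  lspci -d 1b36:0010                      # list the emulated NVMe controllers
  grep -H . /sys/class/nvme/nvme*/serial  # map nvme0..nvme3 to their serial numbers
  # nvme3 (serial 12343) sits behind fdp-subsys3, the subsystem created with fdp=on.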
+ [[ Fedora Linux == FreeBSD ]] 00:02:06.851 + sudo dmesg -Tw 00:02:06.851 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.851 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.851 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.851 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.851 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.851 + FIO_BIN=/usr/src/fio-static/fio 00:02:06.851 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.851 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.851 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.851 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.851 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.851 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.851 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.851 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.851 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.851 19:51:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:06.851 19:51:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.851 19:51:40 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:06.851 19:51:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:06.851 19:51:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.851 19:51:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:06.851 19:51:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:06.851 19:51:40 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:06.851 19:51:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.851 19:51:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.851 19:51:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.851 19:51:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.851 19:51:40 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.851 19:51:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.851 19:51:40 -- paths/export.sh@5 -- $ export PATH 00:02:06.851 19:51:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.851 19:51:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:06.851 19:51:40 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:06.851 19:51:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732045900.XXXXXX 00:02:06.851 19:51:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732045900.BjUZwN 00:02:06.851 19:51:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:06.851 19:51:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:06.851 19:51:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:06.851 19:51:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:06.851 19:51:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:06.851 19:51:40 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:06.851 19:51:40 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:06.851 19:51:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.110 19:51:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:07.110 19:51:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:07.110 19:51:40 -- pm/common@17 -- $ local monitor 00:02:07.110 19:51:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.110 19:51:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.110 19:51:40 -- pm/common@25 -- $ sleep 1 00:02:07.110 19:51:40 -- pm/common@21 -- $ date +%s 00:02:07.110 19:51:40 -- pm/common@21 -- $ date +%s 00:02:07.111 19:51:40 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732045900 00:02:07.111 19:51:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732045900 00:02:07.111 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732045900_collect-cpu-load.pm.log 00:02:07.111 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732045900_collect-vmstat.pm.log 00:02:08.047 19:51:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:08.047 19:51:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.047 19:51:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.047 19:51:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.047 19:51:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.047 Tue Nov 19 07:51:41 PM UTC 2024 00:02:08.047 19:51:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.047 v25.01-pre-199-gf22e807f1 00:02:08.047 19:51:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:08.047 19:51:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:08.047 19:51:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.047 19:51:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.047 19:51:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.047 ************************************ 00:02:08.047 START TEST asan 00:02:08.047 ************************************ 00:02:08.047 using asan 00:02:08.047 19:51:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:08.047 00:02:08.047 real 0m0.000s 00:02:08.047 user 0m0.000s 00:02:08.047 sys 0m0.000s 00:02:08.047 19:51:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.047 19:51:41 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.047 ************************************ 00:02:08.047 END TEST asan 00:02:08.047 ************************************ 00:02:08.047 19:51:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.047 19:51:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.047 19:51:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.047 19:51:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.047 19:51:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.047 ************************************ 00:02:08.047 START TEST ubsan 00:02:08.047 ************************************ 00:02:08.047 using ubsan 00:02:08.047 19:51:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:08.047 00:02:08.047 real 0m0.000s 00:02:08.047 user 0m0.000s 00:02:08.047 sys 0m0.000s 00:02:08.047 ************************************ 00:02:08.047 END TEST ubsan 00:02:08.047 19:51:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.047 19:51:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.047 ************************************ 00:02:08.047 19:51:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.047 19:51:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.047 19:51:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.047 19:51:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.047 19:51:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.047 19:51:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.047 19:51:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:08.047 19:51:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.047 19:51:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:08.047 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.047 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.682 Using 'verbs' RDMA provider 00:02:19.223 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.219 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.219 Creating mk/config.mk...done. 00:02:29.219 Creating mk/cc.flags.mk...done. 00:02:29.219 Type 'make' to build. 00:02:29.219 19:52:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:29.219 19:52:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:29.219 19:52:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:29.219 19:52:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.219 ************************************ 00:02:29.219 START TEST make 00:02:29.219 ************************************ 00:02:29.219 19:52:02 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:29.494 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:29.494 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:29.494 meson setup builddir \ 00:02:29.494 -Dwith-libaio=enabled \ 00:02:29.495 -Dwith-liburing=enabled \ 00:02:29.495 -Dwith-libvfn=disabled \ 00:02:29.495 -Dwith-spdk=disabled \ 00:02:29.495 -Dexamples=false \ 00:02:29.495 -Dtests=false \ 00:02:29.495 -Dtools=false && \ 00:02:29.495 meson compile -C builddir && \ 00:02:29.495 cd -) 00:02:29.495 make[1]: Nothing to be done for 'all'. 
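Editor's note: the xnvme sub-build kicked off above is an ordinary Meson project driven from SPDK's make; a sketch of reproducing or inspecting that configuration by hand, assuming the same checkout paths as this run:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson configure builddir   # lists the option values actually in effect
  meson compile -C builddir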
00:02:31.395 The Meson build system 00:02:31.395 Version: 1.5.0 00:02:31.395 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:31.395 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:31.395 Build type: native build 00:02:31.395 Project name: xnvme 00:02:31.395 Project version: 0.7.5 00:02:31.395 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:31.395 C linker for the host machine: cc ld.bfd 2.40-14 00:02:31.395 Host machine cpu family: x86_64 00:02:31.395 Host machine cpu: x86_64 00:02:31.395 Message: host_machine.system: linux 00:02:31.395 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:31.395 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:31.395 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:31.395 Run-time dependency threads found: YES 00:02:31.395 Has header "setupapi.h" : NO 00:02:31.395 Has header "linux/blkzoned.h" : YES 00:02:31.395 Has header "linux/blkzoned.h" : YES (cached) 00:02:31.395 Has header "libaio.h" : YES 00:02:31.395 Library aio found: YES 00:02:31.395 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:31.395 Run-time dependency liburing found: YES 2.2 00:02:31.395 Dependency libvfn skipped: feature with-libvfn disabled 00:02:31.395 Found CMake: /usr/bin/cmake (3.27.7) 00:02:31.395 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:31.395 Subproject spdk : skipped: feature with-spdk disabled 00:02:31.395 Run-time dependency appleframeworks found: NO (tried framework) 00:02:31.395 Run-time dependency appleframeworks found: NO (tried framework) 00:02:31.395 Library rt found: YES 00:02:31.395 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:31.395 Configuring xnvme_config.h using configuration 00:02:31.395 Configuring xnvme.spec using configuration 00:02:31.395 Run-time dependency bash-completion found: YES 2.11 00:02:31.395 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:31.395 Program cp found: YES (/usr/bin/cp) 00:02:31.395 Build targets in project: 3 00:02:31.395 00:02:31.395 xnvme 0.7.5 00:02:31.395 00:02:31.395 Subprojects 00:02:31.395 spdk : NO Feature 'with-spdk' disabled 00:02:31.395 00:02:31.395 User defined options 00:02:31.395 examples : false 00:02:31.395 tests : false 00:02:31.395 tools : false 00:02:31.395 with-libaio : enabled 00:02:31.395 with-liburing: enabled 00:02:31.395 with-libvfn : disabled 00:02:31.395 with-spdk : disabled 00:02:31.395 00:02:31.395 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.656 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:31.656 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:31.916 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:31.916 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:31.916 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:31.917 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:31.917 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:31.917 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:31.917 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:31.917 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:31.917 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:31.917 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:31.917 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:31.917 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:31.917 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:31.917 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:31.917 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:31.917 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:31.917 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:31.917 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:31.917 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:31.917 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:31.917 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:31.917 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:31.917 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:31.917 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:31.917 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:32.175 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:32.175 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:32.175 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:32.175 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:32.175 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:32.175 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:32.175 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:32.175 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:32.175 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:32.175 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:32.175 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:32.175 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:32.175 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:32.175 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:32.175 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:32.175 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:32.175 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:32.175 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:32.175 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:32.175 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:32.175 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:32.175 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:32.175 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:32.175 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:32.175 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:32.175 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:32.175 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:32.175 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:32.175 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:32.175 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:32.175 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:32.175 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:32.175 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:32.175 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:32.432 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:32.432 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:32.432 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:32.432 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:32.432 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:32.432 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:32.432 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:32.432 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:32.432 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:32.432 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:32.432 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:32.432 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:32.689 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:32.689 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:32.689 [75/76] Linking static target lib/libxnvme.a 00:02:32.689 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:32.690 INFO: autodetecting backend as ninja 00:02:32.690 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:32.947 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:39.548 The Meson build system 00:02:39.548 Version: 1.5.0 00:02:39.548 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:39.548 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:39.548 Build type: native build 00:02:39.548 Program cat found: YES (/usr/bin/cat) 00:02:39.548 Project name: DPDK 00:02:39.548 Project version: 24.03.0 00:02:39.548 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:39.548 C linker for the host machine: cc ld.bfd 2.40-14 00:02:39.548 Host machine cpu family: x86_64 00:02:39.548 Host machine cpu: x86_64 00:02:39.548 Message: ## Building in Developer Mode ## 00:02:39.548 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.548 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:39.548 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.548 Program python3 found: YES (/usr/bin/python3) 00:02:39.548 Program cat found: YES (/usr/bin/cat) 00:02:39.548 Compiler for C supports arguments -march=native: YES 00:02:39.548 Checking for size of "void *" : 8 00:02:39.548 Checking for size of "void *" : 8 (cached) 00:02:39.548 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:39.548 Library m found: YES 00:02:39.548 Library numa found: YES 00:02:39.548 Has header "numaif.h" : YES 00:02:39.548 Library fdt found: NO 00:02:39.548 Library execinfo found: NO 00:02:39.548 Has header "execinfo.h" : YES 00:02:39.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:39.548 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.548 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.548 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.548 Run-time dependency openssl found: YES 3.1.1 00:02:39.548 Run-time dependency libpcap found: YES 1.10.4 00:02:39.548 Has header "pcap.h" with dependency libpcap: YES 00:02:39.548 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.548 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.548 Compiler for C supports arguments -Wformat: YES 00:02:39.548 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.548 Compiler for C supports arguments -Wformat-security: NO 00:02:39.548 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.548 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.548 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.548 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.548 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.548 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.548 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.548 Compiler for C supports arguments -Wundef: YES 00:02:39.548 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.548 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.548 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.548 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.548 Program objdump found: YES (/usr/bin/objdump) 00:02:39.548 Compiler for C supports arguments -mavx512f: YES 00:02:39.548 Checking if "AVX512 checking" compiles: YES 00:02:39.548 Fetching value of define "__SSE4_2__" : 1 00:02:39.548 Fetching value of define "__AES__" : 1 00:02:39.548 Fetching value of define "__AVX__" : 1 00:02:39.548 Fetching value of define "__AVX2__" : 1 00:02:39.548 Fetching value of define "__AVX512BW__" : 1 00:02:39.548 Fetching value of define "__AVX512CD__" : 1 00:02:39.548 Fetching value of define "__AVX512DQ__" : 1 00:02:39.548 Fetching value of define "__AVX512F__" : 1 00:02:39.549 Fetching value of define "__AVX512VL__" : 1 00:02:39.549 Fetching value of define "__PCLMUL__" : 1 00:02:39.549 Fetching value of define "__RDRND__" : 1 00:02:39.549 Fetching value of define "__RDSEED__" : 1 00:02:39.549 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:39.549 Fetching value of define "__znver1__" : (undefined) 00:02:39.549 Fetching value of define "__znver2__" : (undefined) 00:02:39.549 Fetching value of define "__znver3__" : (undefined) 00:02:39.549 Fetching value of define "__znver4__" : (undefined) 00:02:39.549 Library asan found: YES 00:02:39.549 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.549 Message: lib/log: Defining dependency "log" 00:02:39.549 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.549 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.549 Library rt found: YES 00:02:39.549 Checking for function "getentropy" : NO 00:02:39.549 Message: 
lib/eal: Defining dependency "eal" 00:02:39.549 Message: lib/ring: Defining dependency "ring" 00:02:39.549 Message: lib/rcu: Defining dependency "rcu" 00:02:39.549 Message: lib/mempool: Defining dependency "mempool" 00:02:39.549 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.549 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.549 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.549 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.549 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.549 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:39.549 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:39.549 Compiler for C supports arguments -mpclmul: YES 00:02:39.549 Compiler for C supports arguments -maes: YES 00:02:39.549 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.549 Compiler for C supports arguments -mavx512bw: YES 00:02:39.549 Compiler for C supports arguments -mavx512dq: YES 00:02:39.549 Compiler for C supports arguments -mavx512vl: YES 00:02:39.549 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.549 Compiler for C supports arguments -mavx2: YES 00:02:39.549 Compiler for C supports arguments -mavx: YES 00:02:39.549 Message: lib/net: Defining dependency "net" 00:02:39.549 Message: lib/meter: Defining dependency "meter" 00:02:39.549 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.549 Message: lib/pci: Defining dependency "pci" 00:02:39.549 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.549 Message: lib/hash: Defining dependency "hash" 00:02:39.549 Message: lib/timer: Defining dependency "timer" 00:02:39.549 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.549 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.549 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.549 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.549 Message: lib/power: Defining dependency "power" 00:02:39.549 Message: lib/reorder: Defining dependency "reorder" 00:02:39.549 Message: lib/security: Defining dependency "security" 00:02:39.549 Has header "linux/userfaultfd.h" : YES 00:02:39.549 Has header "linux/vduse.h" : YES 00:02:39.549 Message: lib/vhost: Defining dependency "vhost" 00:02:39.549 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.549 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.549 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.549 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.549 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:39.549 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:39.549 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:39.549 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:39.549 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:39.549 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:39.549 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:39.549 Configuring doxy-api-html.conf using configuration 00:02:39.549 Configuring doxy-api-man.conf using configuration 00:02:39.549 Program mandb found: YES (/usr/bin/mandb) 00:02:39.549 Program sphinx-build found: NO 00:02:39.549 Configuring rte_build_config.h using configuration 00:02:39.549 Message: 00:02:39.549 ================= 00:02:39.549 Applications Enabled 00:02:39.549 
=================
00:02:39.549
00:02:39.549 apps:
00:02:39.549
00:02:39.549
00:02:39.549 Message:
00:02:39.549 =================
00:02:39.549 Libraries Enabled
00:02:39.549 =================
00:02:39.549
00:02:39.549 libs:
00:02:39.549 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:39.549 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:39.549 cryptodev, dmadev, power, reorder, security, vhost,
00:02:39.549
00:02:39.549 Message:
00:02:39.549 ===============
00:02:39.549 Drivers Enabled
00:02:39.549 ===============
00:02:39.549
00:02:39.549 common:
00:02:39.549
00:02:39.549 bus:
00:02:39.549 pci, vdev,
00:02:39.549 mempool:
00:02:39.549 ring,
00:02:39.549 dma:
00:02:39.549
00:02:39.549 net:
00:02:39.549
00:02:39.549 crypto:
00:02:39.549
00:02:39.549 compress:
00:02:39.549
00:02:39.549 vdpa:
00:02:39.549
00:02:39.549
00:02:39.549 Message:
00:02:39.549 =================
00:02:39.549 Content Skipped
00:02:39.549 =================
00:02:39.549
00:02:39.549 apps:
00:02:39.549 dumpcap: explicitly disabled via build config
00:02:39.549 graph: explicitly disabled via build config
00:02:39.549 pdump: explicitly disabled via build config
00:02:39.549 proc-info: explicitly disabled via build config
00:02:39.549 test-acl: explicitly disabled via build config
00:02:39.549 test-bbdev: explicitly disabled via build config
00:02:39.549 test-cmdline: explicitly disabled via build config
00:02:39.549 test-compress-perf: explicitly disabled via build config
00:02:39.549 test-crypto-perf: explicitly disabled via build config
00:02:39.549 test-dma-perf: explicitly disabled via build config
00:02:39.549 test-eventdev: explicitly disabled via build config
00:02:39.549 test-fib: explicitly disabled via build config
00:02:39.549 test-flow-perf: explicitly disabled via build config
00:02:39.549 test-gpudev: explicitly disabled via build config
00:02:39.549 test-mldev: explicitly disabled via build config
00:02:39.549 test-pipeline: explicitly disabled via build config
00:02:39.549 test-pmd: explicitly disabled via build config
00:02:39.549 test-regex: explicitly disabled via build config
00:02:39.549 test-sad: explicitly disabled via build config
00:02:39.549 test-security-perf: explicitly disabled via build config
00:02:39.549
00:02:39.549 libs:
00:02:39.549 argparse: explicitly disabled via build config
00:02:39.549 metrics: explicitly disabled via build config
00:02:39.549 acl: explicitly disabled via build config
00:02:39.549 bbdev: explicitly disabled via build config
00:02:39.549 bitratestats: explicitly disabled via build config
00:02:39.549 bpf: explicitly disabled via build config
00:02:39.549 cfgfile: explicitly disabled via build config
00:02:39.549 distributor: explicitly disabled via build config
00:02:39.549 efd: explicitly disabled via build config
00:02:39.549 eventdev: explicitly disabled via build config
00:02:39.549 dispatcher: explicitly disabled via build config
00:02:39.549 gpudev: explicitly disabled via build config
00:02:39.549 gro: explicitly disabled via build config
00:02:39.549 gso: explicitly disabled via build config
00:02:39.549 ip_frag: explicitly disabled via build config
00:02:39.549 jobstats: explicitly disabled via build config
00:02:39.549 latencystats: explicitly disabled via build config
00:02:39.549 lpm: explicitly disabled via build config
00:02:39.549 member: explicitly disabled via build config
00:02:39.549 pcapng: explicitly disabled via build config
00:02:39.549 rawdev: explicitly disabled via build config
00:02:39.549 regexdev: explicitly disabled via build config
00:02:39.549 mldev: explicitly disabled via build config
00:02:39.549 rib: explicitly disabled via build config
00:02:39.549 sched: explicitly disabled via build config
00:02:39.549 stack: explicitly disabled via build config
00:02:39.549 ipsec: explicitly disabled via build config
00:02:39.549 pdcp: explicitly disabled via build config
00:02:39.549 fib: explicitly disabled via build config
00:02:39.549 port: explicitly disabled via build config
00:02:39.549 pdump: explicitly disabled via build config
00:02:39.549 table: explicitly disabled via build config
00:02:39.549 pipeline: explicitly disabled via build config
00:02:39.549 graph: explicitly disabled via build config
00:02:39.549 node: explicitly disabled via build config
00:02:39.549
00:02:39.549 drivers:
00:02:39.549 common/cpt: not in enabled drivers build config
00:02:39.549 common/dpaax: not in enabled drivers build config
00:02:39.549 common/iavf: not in enabled drivers build config
00:02:39.549 common/idpf: not in enabled drivers build config
00:02:39.549 common/ionic: not in enabled drivers build config
00:02:39.549 common/mvep: not in enabled drivers build config
00:02:39.549 common/octeontx: not in enabled drivers build config
00:02:39.549 bus/auxiliary: not in enabled drivers build config
00:02:39.549 bus/cdx: not in enabled drivers build config
00:02:39.549 bus/dpaa: not in enabled drivers build config
00:02:39.549 bus/fslmc: not in enabled drivers build config
00:02:39.549 bus/ifpga: not in enabled drivers build config
00:02:39.549 bus/platform: not in enabled drivers build config
00:02:39.549 bus/uacce: not in enabled drivers build config
00:02:39.549 bus/vmbus: not in enabled drivers build config
00:02:39.549 common/cnxk: not in enabled drivers build config
00:02:39.549 common/mlx5: not in enabled drivers build config
00:02:39.549 common/nfp: not in enabled drivers build config
00:02:39.549 common/nitrox: not in enabled drivers build config
00:02:39.549 common/qat: not in enabled drivers build config
00:02:39.549 common/sfc_efx: not in enabled drivers build config
00:02:39.549 mempool/bucket: not in enabled drivers build config
00:02:39.549 mempool/cnxk: not in enabled drivers build config
00:02:39.549 mempool/dpaa: not in enabled drivers build config
00:02:39.550 mempool/dpaa2: not in enabled drivers build config
00:02:39.550 mempool/octeontx: not in enabled drivers build config
00:02:39.550 mempool/stack: not in enabled drivers build config
00:02:39.550 dma/cnxk: not in enabled drivers build config
00:02:39.550 dma/dpaa: not in enabled drivers build config
00:02:39.550 dma/dpaa2: not in enabled drivers build config
00:02:39.550 dma/hisilicon: not in enabled drivers build config
00:02:39.550 dma/idxd: not in enabled drivers build config
00:02:39.550 dma/ioat: not in enabled drivers build config
00:02:39.550 dma/skeleton: not in enabled drivers build config
00:02:39.550 net/af_packet: not in enabled drivers build config
00:02:39.550 net/af_xdp: not in enabled drivers build config
00:02:39.550 net/ark: not in enabled drivers build config
00:02:39.550 net/atlantic: not in enabled drivers build config
00:02:39.550 net/avp: not in enabled drivers build config
00:02:39.550 net/axgbe: not in enabled drivers build config
00:02:39.550 net/bnx2x: not in enabled drivers build config
00:02:39.550 net/bnxt: not in enabled drivers build config
00:02:39.550 net/bonding: not in enabled drivers build config
00:02:39.550 net/cnxk: not in enabled drivers build config
00:02:39.550 net/cpfl: not in enabled drivers build config
00:02:39.550 net/cxgbe: not in enabled drivers build config
00:02:39.550 net/dpaa: not in enabled drivers build config
00:02:39.550 net/dpaa2: not in enabled drivers build config
00:02:39.550 net/e1000: not in enabled drivers build config
00:02:39.550 net/ena: not in enabled drivers build config
00:02:39.550 net/enetc: not in enabled drivers build config
00:02:39.550 net/enetfec: not in enabled drivers build config
00:02:39.550 net/enic: not in enabled drivers build config
00:02:39.550 net/failsafe: not in enabled drivers build config
00:02:39.550 net/fm10k: not in enabled drivers build config
00:02:39.550 net/gve: not in enabled drivers build config
00:02:39.550 net/hinic: not in enabled drivers build config
00:02:39.550 net/hns3: not in enabled drivers build config
00:02:39.550 net/i40e: not in enabled drivers build config
00:02:39.550 net/iavf: not in enabled drivers build config
00:02:39.550 net/ice: not in enabled drivers build config
00:02:39.550 net/idpf: not in enabled drivers build config
00:02:39.550 net/igc: not in enabled drivers build config
00:02:39.550 net/ionic: not in enabled drivers build config
00:02:39.550 net/ipn3ke: not in enabled drivers build config
00:02:39.550 net/ixgbe: not in enabled drivers build config
00:02:39.550 net/mana: not in enabled drivers build config
00:02:39.550 net/memif: not in enabled drivers build config
00:02:39.550 net/mlx4: not in enabled drivers build config
00:02:39.550 net/mlx5: not in enabled drivers build config
00:02:39.550 net/mvneta: not in enabled drivers build config
00:02:39.550 net/mvpp2: not in enabled drivers build config
00:02:39.550 net/netvsc: not in enabled drivers build config
00:02:39.550 net/nfb: not in enabled drivers build config
00:02:39.550 net/nfp: not in enabled drivers build config
00:02:39.550 net/ngbe: not in enabled drivers build config
00:02:39.550 net/null: not in enabled drivers build config
00:02:39.550 net/octeontx: not in enabled drivers build config
00:02:39.550 net/octeon_ep: not in enabled drivers build config
00:02:39.550 net/pcap: not in enabled drivers build config
00:02:39.550 net/pfe: not in enabled drivers build config
00:02:39.550 net/qede: not in enabled drivers build config
00:02:39.550 net/ring: not in enabled drivers build config
00:02:39.550 net/sfc: not in enabled drivers build config
00:02:39.550 net/softnic: not in enabled drivers build config
00:02:39.550 net/tap: not in enabled drivers build config
00:02:39.550 net/thunderx: not in enabled drivers build config
00:02:39.550 net/txgbe: not in enabled drivers build config
00:02:39.550 net/vdev_netvsc: not in enabled drivers build config
00:02:39.550 net/vhost: not in enabled drivers build config
00:02:39.550 net/virtio: not in enabled drivers build config
00:02:39.550 net/vmxnet3: not in enabled drivers build config
00:02:39.550 raw/*: missing internal dependency, "rawdev"
00:02:39.550 crypto/armv8: not in enabled drivers build config
00:02:39.550 crypto/bcmfs: not in enabled drivers build config
00:02:39.550 crypto/caam_jr: not in enabled drivers build config
00:02:39.550 crypto/ccp: not in enabled drivers build config
00:02:39.550 crypto/cnxk: not in enabled drivers build config
00:02:39.550 crypto/dpaa_sec: not in enabled drivers build config
00:02:39.550 crypto/dpaa2_sec: not in enabled drivers build config
00:02:39.550 crypto/ipsec_mb: not in enabled drivers build config
00:02:39.550 crypto/mlx5: not in enabled drivers build config
00:02:39.550 crypto/mvsam: not in enabled drivers build config
00:02:39.550 crypto/nitrox: not in enabled drivers build config
00:02:39.550 crypto/null: not in enabled drivers build config
00:02:39.550 crypto/octeontx: not in enabled drivers build config
00:02:39.550 crypto/openssl: not in enabled drivers build config
00:02:39.550 crypto/scheduler: not in enabled drivers build config
00:02:39.550 crypto/uadk: not in enabled drivers build config
00:02:39.550 crypto/virtio: not in enabled drivers build config
00:02:39.550 compress/isal: not in enabled drivers build config
00:02:39.550 compress/mlx5: not in enabled drivers build config
00:02:39.550 compress/nitrox: not in enabled drivers build config
00:02:39.550 compress/octeontx: not in enabled drivers build config
00:02:39.550 compress/zlib: not in enabled drivers build config
00:02:39.550 regex/*: missing internal dependency, "regexdev"
00:02:39.550 ml/*: missing internal dependency, "mldev"
00:02:39.550 vdpa/ifc: not in enabled drivers build config
00:02:39.550 vdpa/mlx5: not in enabled drivers build config
00:02:39.550 vdpa/nfp: not in enabled drivers build config
00:02:39.550 vdpa/sfc: not in enabled drivers build config
00:02:39.550 event/*: missing internal dependency, "eventdev"
00:02:39.550 baseband/*: missing internal dependency, "bbdev"
00:02:39.550 gpu/*: missing internal dependency, "gpudev"
00:02:39.550
00:02:39.550
00:02:39.550 Build targets in project: 84
00:02:39.550
00:02:39.550 DPDK 24.03.0
00:02:39.550
00:02:39.550 User defined options
00:02:39.550 buildtype : debug
00:02:39.550 default_library : shared
00:02:39.550 libdir : lib
00:02:39.550 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:39.550 b_sanitize : address
00:02:39.550 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:39.550 c_link_args :
00:02:39.550 cpu_instruction_set: native
00:02:39.550 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:39.550 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:39.550 enable_docs : false
00:02:39.550 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:39.550 enable_kmods : false
00:02:39.550 max_lcores : 128
00:02:39.550 tests : false
00:02:39.550
00:02:39.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.808 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:39.808 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:39.808 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:39.808 [3/267] Linking static target lib/librte_kvargs.a
00:02:39.808 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:39.808 [5/267] Linking static target lib/librte_log.a
00:02:39.808 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:40.066 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:40.066 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:40.066 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:40.066 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:40.066 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:40.067 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:40.067 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:40.067 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:40.067 [15/267] Linking static target lib/librte_telemetry.a
00:02:40.067 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.325 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:40.325 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:40.325 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:40.583 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:40.584 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:40.584 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:40.584 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:40.584 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:40.584 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:40.584 [26/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.584 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:40.584 [28/267] Linking target lib/librte_log.so.24.1
00:02:40.584 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:40.842 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:40.842 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:40.842 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:40.842 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:40.842 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.842 [35/267] Linking target lib/librte_kvargs.so.24.1
00:02:40.842 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:40.842 [37/267] Linking target lib/librte_telemetry.so.24.1
00:02:40.842 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:41.101 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:41.101 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:41.101 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:41.101 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:41.101 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:41.101 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:41.101 [45/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:41.101 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:41.359 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:41.359 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
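The "User defined options" block above is meson's echo of the configure-time settings for this embedded DPDK build. The configure step itself ran before this excerpt and is driven by SPDK's build scripts, so the exact command is not part of the log; the sketch below is a hypothetical reconstruction of a meson invocation that would yield that report. The build-tmp and prefix paths come from the ninja -C line and the prefix option above, and every -D value is copied from the printed report, but nothing here is a quote of the actual command:

    # Hypothetical reconstruction of the configure step implied by the
    # "User defined options" report above; the real invocation is issued by
    # SPDK's build scripts and is not shown in this log.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

buildtype, default_library, b_sanitize, and c_args are core meson options; disable_apps, disable_libs, enable_drivers, cpu_instruction_set, max_lcores, and tests are DPDK-specific meson options, which is why the "Content Skipped" sections above mirror those lists exactly.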
00:02:41.359 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.359 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.359 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.359 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.359 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.618 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.618 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.618 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.618 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.618 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.618 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.618 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.877 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.877 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.877 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.877 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:41.877 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.877 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.877 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.135 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.135 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.135 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.135 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.135 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.394 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.394 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.394 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.394 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.394 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.394 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.394 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.394 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.651 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.651 [82/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.651 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.651 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.651 [85/267] Linking static target lib/librte_ring.a 00:02:42.651 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.651 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.910 [88/267] Linking static target lib/librte_eal.a 00:02:42.910 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.910 
[90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:42.910 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.910 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.910 [93/267] Linking static target lib/librte_mempool.a 00:02:42.910 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.910 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.910 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.910 [97/267] Linking static target lib/librte_rcu.a 00:02:43.169 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.169 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.169 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.169 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.427 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:43.427 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.427 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.427 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.427 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.427 [107/267] Linking static target lib/librte_net.a 00:02:43.427 [108/267] Linking static target lib/librte_meter.a 00:02:43.427 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.685 [110/267] Linking static target lib/librte_mbuf.a 00:02:43.685 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.685 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.946 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.946 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.946 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.946 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.946 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.207 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.207 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.207 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.465 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.465 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.465 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.734 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.734 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.734 [126/267] Linking static target lib/librte_pci.a 00:02:44.734 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.734 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.734 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.734 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.734 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.734 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.734 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.734 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.734 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.734 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.734 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.011 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.011 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.011 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.011 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.011 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.011 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.011 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.011 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.011 [146/267] Linking static target lib/librte_cmdline.a 00:02:45.011 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:45.270 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.270 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.270 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.528 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.528 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.528 [153/267] Linking static target lib/librte_timer.a 00:02:45.528 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.528 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.528 [156/267] Linking static target lib/librte_ethdev.a 00:02:45.528 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.785 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.785 [159/267] Linking static target lib/librte_hash.a 00:02:45.785 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:45.785 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.785 [162/267] Linking static target lib/librte_compressdev.a 00:02:45.785 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:45.785 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.043 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.043 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.043 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.043 [168/267] Linking static target lib/librte_dmadev.a 00:02:46.043 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.302 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.302 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.302 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.302 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.560 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.560 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.560 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:46.560 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.560 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.560 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.560 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:46.818 [181/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.818 [182/267] Linking static target lib/librte_cryptodev.a 00:02:46.818 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.818 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.818 [185/267] Linking static target lib/librte_power.a 00:02:47.077 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.077 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.077 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.077 [189/267] Linking static target lib/librte_reorder.a 00:02:47.077 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.335 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.335 [192/267] Linking static target lib/librte_security.a 00:02:47.335 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.594 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.854 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:47.854 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.854 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:47.854 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:47.854 [199/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.112 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.112 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.112 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.112 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.370 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.370 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.370 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.370 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.370 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.370 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.629 [210/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.629 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.629 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.629 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.629 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.629 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:48.629 [216/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.629 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.629 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.629 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.629 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:48.887 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:48.887 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.887 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.887 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:48.887 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.145 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.403 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.776 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.776 [229/267] Linking target lib/librte_eal.so.24.1 00:02:50.776 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.776 [231/267] Linking target lib/librte_pci.so.24.1 00:02:50.776 [232/267] Linking target lib/librte_ring.so.24.1 00:02:50.776 [233/267] Linking target lib/librte_dmadev.so.24.1 00:02:50.776 [234/267] Linking target lib/librte_meter.so.24.1 00:02:50.776 [235/267] Linking target lib/librte_timer.so.24.1 00:02:50.776 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.035 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.035 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.035 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.035 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.035 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.035 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.035 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:51.035 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:51.035 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.035 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.035 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.293 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:51.293 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
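A note on the symbol steps interleaved above: each lib/*.sym_chk target is DPDK's own check that a library's version map stays in sync with what its objects actually define, while the "Generating symbol file ... .symbols" steps are meson's bookkeeping so that dependents only get relinked when a shared object's exported interface actually changes. As a hedged illustration, the resulting export list can also be inspected by hand with binutils; the path below is an assumption assembled from the ninja -C build directory earlier in this log, not something the log prints verbatim:

    # Sketch: dump the dynamic symbols exported by one of the DPDK shared
    # objects linked above. Adjust the path if your tree differs.
    nm -D --defined-only \
        /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.1 | head -n 20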
00:02:51.293 [250/267] Linking target lib/librte_net.so.24.1 00:02:51.293 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:51.294 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:51.294 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:51.294 [254/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.553 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.553 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.553 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:51.553 [258/267] Linking target lib/librte_hash.so.24.1 00:02:51.553 [259/267] Linking target lib/librte_security.so.24.1 00:02:51.553 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:51.553 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:51.553 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:51.553 [263/267] Linking target lib/librte_power.so.24.1 00:02:52.118 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:52.118 [265/267] Linking static target lib/librte_vhost.a 00:02:53.493 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.493 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:53.493 INFO: autodetecting backend as ninja 00:02:53.493 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:05.695 CC lib/ut/ut.o 00:03:05.695 CC lib/log/log.o 00:03:05.695 CC lib/log/log_deprecated.o 00:03:05.695 CC lib/log/log_flags.o 00:03:05.695 CC lib/ut_mock/mock.o 00:03:05.695 LIB libspdk_ut.a 00:03:05.695 LIB libspdk_ut_mock.a 00:03:05.695 LIB libspdk_log.a 00:03:05.695 SO libspdk_ut.so.2.0 00:03:05.695 SO libspdk_ut_mock.so.6.0 00:03:05.695 SO libspdk_log.so.7.1 00:03:05.695 SYMLINK libspdk_ut_mock.so 00:03:05.695 SYMLINK libspdk_ut.so 00:03:05.695 SYMLINK libspdk_log.so 00:03:05.957 CC lib/dma/dma.o 00:03:05.957 CC lib/ioat/ioat.o 00:03:05.957 CXX lib/trace_parser/trace.o 00:03:05.957 CC lib/util/base64.o 00:03:05.957 CC lib/util/bit_array.o 00:03:05.957 CC lib/util/cpuset.o 00:03:05.957 CC lib/util/crc16.o 00:03:05.957 CC lib/util/crc32.o 00:03:05.957 CC lib/util/crc32c.o 00:03:05.957 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.957 CC lib/util/crc32_ieee.o 00:03:05.957 CC lib/util/crc64.o 00:03:05.957 CC lib/util/dif.o 00:03:05.957 CC lib/util/fd.o 00:03:05.957 LIB libspdk_dma.a 00:03:06.219 CC lib/util/fd_group.o 00:03:06.219 SO libspdk_dma.so.5.0 00:03:06.219 CC lib/vfio_user/host/vfio_user.o 00:03:06.219 CC lib/util/file.o 00:03:06.219 CC lib/util/hexlify.o 00:03:06.219 SYMLINK libspdk_dma.so 00:03:06.219 CC lib/util/iov.o 00:03:06.219 LIB libspdk_ioat.a 00:03:06.219 CC lib/util/math.o 00:03:06.219 SO libspdk_ioat.so.7.0 00:03:06.219 CC lib/util/net.o 00:03:06.219 SYMLINK libspdk_ioat.so 00:03:06.219 CC lib/util/pipe.o 00:03:06.219 CC lib/util/strerror_tls.o 00:03:06.219 CC lib/util/string.o 00:03:06.219 CC lib/util/uuid.o 00:03:06.219 LIB libspdk_vfio_user.a 00:03:06.219 CC lib/util/xor.o 00:03:06.480 CC lib/util/zipf.o 00:03:06.480 SO libspdk_vfio_user.so.5.0 00:03:06.480 CC lib/util/md5.o 00:03:06.480 SYMLINK libspdk_vfio_user.so 00:03:06.741 LIB libspdk_util.a 00:03:06.741 SO libspdk_util.so.10.1 00:03:07.001 LIB libspdk_trace_parser.a 00:03:07.001 SYMLINK libspdk_util.so 00:03:07.001 SO 
libspdk_trace_parser.so.6.0 00:03:07.001 SYMLINK libspdk_trace_parser.so 00:03:07.001 CC lib/rdma_utils/rdma_utils.o 00:03:07.001 CC lib/conf/conf.o 00:03:07.001 CC lib/vmd/led.o 00:03:07.001 CC lib/vmd/vmd.o 00:03:07.001 CC lib/env_dpdk/env.o 00:03:07.001 CC lib/idxd/idxd.o 00:03:07.001 CC lib/idxd/idxd_user.o 00:03:07.001 CC lib/env_dpdk/pci.o 00:03:07.001 CC lib/env_dpdk/memory.o 00:03:07.001 CC lib/json/json_parse.o 00:03:07.258 CC lib/json/json_util.o 00:03:07.258 LIB libspdk_conf.a 00:03:07.258 CC lib/idxd/idxd_kernel.o 00:03:07.258 SO libspdk_conf.so.6.0 00:03:07.258 CC lib/json/json_write.o 00:03:07.258 LIB libspdk_rdma_utils.a 00:03:07.258 SO libspdk_rdma_utils.so.1.0 00:03:07.259 SYMLINK libspdk_conf.so 00:03:07.259 CC lib/env_dpdk/init.o 00:03:07.259 CC lib/env_dpdk/threads.o 00:03:07.259 SYMLINK libspdk_rdma_utils.so 00:03:07.259 CC lib/env_dpdk/pci_ioat.o 00:03:07.517 CC lib/env_dpdk/pci_virtio.o 00:03:07.517 CC lib/env_dpdk/pci_vmd.o 00:03:07.517 CC lib/env_dpdk/pci_idxd.o 00:03:07.517 CC lib/env_dpdk/pci_event.o 00:03:07.517 CC lib/env_dpdk/sigbus_handler.o 00:03:07.517 CC lib/env_dpdk/pci_dpdk.o 00:03:07.517 LIB libspdk_json.a 00:03:07.517 LIB libspdk_vmd.a 00:03:07.517 SO libspdk_json.so.6.0 00:03:07.517 SO libspdk_vmd.so.6.0 00:03:07.517 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.776 SYMLINK libspdk_vmd.so 00:03:07.776 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.776 LIB libspdk_idxd.a 00:03:07.776 SYMLINK libspdk_json.so 00:03:07.776 SO libspdk_idxd.so.12.1 00:03:07.776 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:07.776 CC lib/rdma_provider/common.o 00:03:07.776 SYMLINK libspdk_idxd.so 00:03:07.776 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.776 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.776 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.776 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.776 LIB libspdk_rdma_provider.a 00:03:08.034 SO libspdk_rdma_provider.so.7.0 00:03:08.034 SYMLINK libspdk_rdma_provider.so 00:03:08.034 LIB libspdk_jsonrpc.a 00:03:08.034 SO libspdk_jsonrpc.so.6.0 00:03:08.295 SYMLINK libspdk_jsonrpc.so 00:03:08.295 CC lib/rpc/rpc.o 00:03:08.555 LIB libspdk_env_dpdk.a 00:03:08.555 SO libspdk_env_dpdk.so.15.1 00:03:08.555 LIB libspdk_rpc.a 00:03:08.555 SO libspdk_rpc.so.6.0 00:03:08.817 SYMLINK libspdk_env_dpdk.so 00:03:08.817 SYMLINK libspdk_rpc.so 00:03:08.817 CC lib/keyring/keyring.o 00:03:08.817 CC lib/trace/trace_flags.o 00:03:08.817 CC lib/notify/notify.o 00:03:08.817 CC lib/keyring/keyring_rpc.o 00:03:08.817 CC lib/trace/trace.o 00:03:08.817 CC lib/notify/notify_rpc.o 00:03:08.817 CC lib/trace/trace_rpc.o 00:03:09.078 LIB libspdk_notify.a 00:03:09.078 SO libspdk_notify.so.6.0 00:03:09.078 LIB libspdk_keyring.a 00:03:09.078 SYMLINK libspdk_notify.so 00:03:09.078 SO libspdk_keyring.so.2.0 00:03:09.339 LIB libspdk_trace.a 00:03:09.339 SYMLINK libspdk_keyring.so 00:03:09.339 SO libspdk_trace.so.11.0 00:03:09.339 SYMLINK libspdk_trace.so 00:03:09.600 CC lib/thread/thread.o 00:03:09.600 CC lib/thread/iobuf.o 00:03:09.600 CC lib/sock/sock.o 00:03:09.600 CC lib/sock/sock_rpc.o 00:03:10.173 LIB libspdk_sock.a 00:03:10.173 SO libspdk_sock.so.10.0 00:03:10.173 SYMLINK libspdk_sock.so 00:03:10.435 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.435 CC lib/nvme/nvme_ctrlr.o 00:03:10.435 CC lib/nvme/nvme_fabric.o 00:03:10.435 CC lib/nvme/nvme_ns_cmd.o 00:03:10.435 CC lib/nvme/nvme_pcie_common.o 00:03:10.435 CC lib/nvme/nvme_ns.o 00:03:10.435 CC lib/nvme/nvme_pcie.o 00:03:10.435 CC lib/nvme/nvme.o 00:03:10.435 CC lib/nvme/nvme_qpair.o 00:03:11.009 CC lib/nvme/nvme_quirks.o 
00:03:11.009 CC lib/nvme/nvme_transport.o 00:03:11.268 CC lib/nvme/nvme_discovery.o 00:03:11.268 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.268 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.268 CC lib/nvme/nvme_tcp.o 00:03:11.268 CC lib/nvme/nvme_opal.o 00:03:11.268 LIB libspdk_thread.a 00:03:11.268 SO libspdk_thread.so.11.0 00:03:11.528 SYMLINK libspdk_thread.so 00:03:11.528 CC lib/nvme/nvme_io_msg.o 00:03:11.528 CC lib/nvme/nvme_poll_group.o 00:03:11.528 CC lib/nvme/nvme_zns.o 00:03:11.528 CC lib/nvme/nvme_stubs.o 00:03:11.787 CC lib/nvme/nvme_auth.o 00:03:11.787 CC lib/nvme/nvme_cuse.o 00:03:11.787 CC lib/nvme/nvme_rdma.o 00:03:12.046 CC lib/accel/accel.o 00:03:12.046 CC lib/blob/blobstore.o 00:03:12.046 CC lib/init/json_config.o 00:03:12.304 CC lib/fsdev/fsdev.o 00:03:12.304 CC lib/virtio/virtio.o 00:03:12.304 CC lib/init/subsystem.o 00:03:12.304 CC lib/blob/request.o 00:03:12.563 CC lib/init/subsystem_rpc.o 00:03:12.563 CC lib/virtio/virtio_vhost_user.o 00:03:12.563 CC lib/init/rpc.o 00:03:12.563 CC lib/accel/accel_rpc.o 00:03:12.563 CC lib/blob/zeroes.o 00:03:12.563 CC lib/blob/blob_bs_dev.o 00:03:12.563 LIB libspdk_init.a 00:03:12.563 SO libspdk_init.so.6.0 00:03:12.822 CC lib/accel/accel_sw.o 00:03:12.822 CC lib/fsdev/fsdev_io.o 00:03:12.822 CC lib/fsdev/fsdev_rpc.o 00:03:12.822 SYMLINK libspdk_init.so 00:03:12.822 CC lib/virtio/virtio_vfio_user.o 00:03:12.822 CC lib/virtio/virtio_pci.o 00:03:12.822 CC lib/event/reactor.o 00:03:12.822 CC lib/event/log_rpc.o 00:03:12.822 CC lib/event/app.o 00:03:13.081 CC lib/event/app_rpc.o 00:03:13.081 LIB libspdk_fsdev.a 00:03:13.081 CC lib/event/scheduler_static.o 00:03:13.081 SO libspdk_fsdev.so.2.0 00:03:13.081 LIB libspdk_accel.a 00:03:13.081 SO libspdk_accel.so.16.0 00:03:13.081 SYMLINK libspdk_fsdev.so 00:03:13.081 LIB libspdk_nvme.a 00:03:13.081 LIB libspdk_virtio.a 00:03:13.081 SYMLINK libspdk_accel.so 00:03:13.081 SO libspdk_virtio.so.7.0 00:03:13.081 SYMLINK libspdk_virtio.so 00:03:13.081 SO libspdk_nvme.so.15.0 00:03:13.081 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.339 CC lib/bdev/bdev.o 00:03:13.339 CC lib/bdev/bdev_rpc.o 00:03:13.339 CC lib/bdev/bdev_zone.o 00:03:13.339 CC lib/bdev/part.o 00:03:13.339 CC lib/bdev/scsi_nvme.o 00:03:13.339 LIB libspdk_event.a 00:03:13.339 SO libspdk_event.so.14.0 00:03:13.339 SYMLINK libspdk_nvme.so 00:03:13.339 SYMLINK libspdk_event.so 00:03:13.660 LIB libspdk_fuse_dispatcher.a 00:03:13.943 SO libspdk_fuse_dispatcher.so.1.0 00:03:13.943 SYMLINK libspdk_fuse_dispatcher.so 00:03:14.879 LIB libspdk_blob.a 00:03:14.879 SO libspdk_blob.so.11.0 00:03:15.137 SYMLINK libspdk_blob.so 00:03:15.137 CC lib/lvol/lvol.o 00:03:15.137 CC lib/blobfs/blobfs.o 00:03:15.137 CC lib/blobfs/tree.o 00:03:16.071 LIB libspdk_bdev.a 00:03:16.071 SO libspdk_bdev.so.17.0 00:03:16.071 SYMLINK libspdk_bdev.so 00:03:16.071 LIB libspdk_lvol.a 00:03:16.071 SO libspdk_lvol.so.10.0 00:03:16.071 SYMLINK libspdk_lvol.so 00:03:16.071 LIB libspdk_blobfs.a 00:03:16.071 SO libspdk_blobfs.so.10.0 00:03:16.071 CC lib/nvmf/ctrlr.o 00:03:16.071 CC lib/nvmf/ctrlr_discovery.o 00:03:16.071 CC lib/nvmf/ctrlr_bdev.o 00:03:16.071 CC lib/nvmf/subsystem.o 00:03:16.071 CC lib/nvmf/nvmf.o 00:03:16.071 CC lib/nbd/nbd.o 00:03:16.071 CC lib/ublk/ublk.o 00:03:16.071 CC lib/ftl/ftl_core.o 00:03:16.071 CC lib/scsi/dev.o 00:03:16.071 SYMLINK libspdk_blobfs.so 00:03:16.071 CC lib/scsi/lun.o 00:03:16.329 CC lib/scsi/port.o 00:03:16.329 CC lib/ftl/ftl_init.o 00:03:16.329 CC lib/ftl/ftl_layout.o 00:03:16.329 CC lib/scsi/scsi.o 00:03:16.588 CC lib/nbd/nbd_rpc.o 
00:03:16.588 CC lib/ftl/ftl_debug.o 00:03:16.588 CC lib/scsi/scsi_bdev.o 00:03:16.588 CC lib/ublk/ublk_rpc.o 00:03:16.588 CC lib/nvmf/nvmf_rpc.o 00:03:16.588 LIB libspdk_nbd.a 00:03:16.588 SO libspdk_nbd.so.7.0 00:03:16.846 CC lib/ftl/ftl_io.o 00:03:16.846 SYMLINK libspdk_nbd.so 00:03:16.846 CC lib/nvmf/transport.o 00:03:16.846 CC lib/ftl/ftl_sb.o 00:03:16.846 LIB libspdk_ublk.a 00:03:16.846 CC lib/nvmf/tcp.o 00:03:16.846 SO libspdk_ublk.so.3.0 00:03:16.846 CC lib/scsi/scsi_pr.o 00:03:16.846 SYMLINK libspdk_ublk.so 00:03:16.846 CC lib/nvmf/stubs.o 00:03:16.846 CC lib/nvmf/mdns_server.o 00:03:16.846 CC lib/ftl/ftl_l2p.o 00:03:17.103 CC lib/scsi/scsi_rpc.o 00:03:17.103 CC lib/ftl/ftl_l2p_flat.o 00:03:17.103 CC lib/scsi/task.o 00:03:17.103 CC lib/nvmf/rdma.o 00:03:17.103 CC lib/nvmf/auth.o 00:03:17.361 CC lib/ftl/ftl_nv_cache.o 00:03:17.361 CC lib/ftl/ftl_band.o 00:03:17.361 CC lib/ftl/ftl_band_ops.o 00:03:17.361 LIB libspdk_scsi.a 00:03:17.361 CC lib/ftl/ftl_writer.o 00:03:17.361 SO libspdk_scsi.so.9.0 00:03:17.619 SYMLINK libspdk_scsi.so 00:03:17.619 CC lib/ftl/ftl_rq.o 00:03:17.619 CC lib/ftl/ftl_reloc.o 00:03:17.619 CC lib/ftl/ftl_l2p_cache.o 00:03:17.619 CC lib/ftl/ftl_p2l.o 00:03:17.619 CC lib/ftl/ftl_p2l_log.o 00:03:17.619 CC lib/iscsi/conn.o 00:03:17.877 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.877 CC lib/iscsi/init_grp.o 00:03:17.877 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.135 CC lib/vhost/vhost.o 00:03:18.135 CC lib/vhost/vhost_rpc.o 00:03:18.135 CC lib/iscsi/iscsi.o 00:03:18.135 CC lib/iscsi/param.o 00:03:18.135 CC lib/iscsi/portal_grp.o 00:03:18.135 CC lib/iscsi/tgt_node.o 00:03:18.135 CC lib/iscsi/iscsi_subsystem.o 00:03:18.135 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.393 CC lib/iscsi/iscsi_rpc.o 00:03:18.393 CC lib/vhost/vhost_scsi.o 00:03:18.393 CC lib/vhost/vhost_blk.o 00:03:18.393 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.393 CC lib/vhost/rte_vhost_user.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.651 CC lib/iscsi/task.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.651 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.909 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.909 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.909 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.909 LIB libspdk_nvmf.a 00:03:18.909 CC lib/ftl/utils/ftl_conf.o 00:03:18.909 CC lib/ftl/utils/ftl_md.o 00:03:18.909 CC lib/ftl/utils/ftl_mempool.o 00:03:18.909 SO libspdk_nvmf.so.20.0 00:03:18.909 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.909 CC lib/ftl/utils/ftl_property.o 00:03:19.166 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.166 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.166 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.166 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.166 SYMLINK libspdk_nvmf.so 00:03:19.166 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.166 LIB libspdk_vhost.a 00:03:19.166 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.166 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:19.166 SO libspdk_vhost.so.8.0 00:03:19.166 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.166 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.166 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.166 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.423 SYMLINK libspdk_vhost.so 00:03:19.423 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:19.423 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:19.423 CC lib/ftl/base/ftl_base_dev.o 00:03:19.423 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.423 CC lib/ftl/ftl_trace.o 00:03:19.423 
LIB libspdk_iscsi.a 00:03:19.423 SO libspdk_iscsi.so.8.0 00:03:19.423 LIB libspdk_ftl.a 00:03:19.681 SYMLINK libspdk_iscsi.so 00:03:19.681 SO libspdk_ftl.so.9.0 00:03:19.940 SYMLINK libspdk_ftl.so 00:03:20.198 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.198 CC module/keyring/file/keyring.o 00:03:20.198 CC module/accel/iaa/accel_iaa.o 00:03:20.198 CC module/accel/ioat/accel_ioat.o 00:03:20.198 CC module/accel/dsa/accel_dsa.o 00:03:20.198 CC module/blob/bdev/blob_bdev.o 00:03:20.198 CC module/fsdev/aio/fsdev_aio.o 00:03:20.198 CC module/accel/error/accel_error.o 00:03:20.198 CC module/sock/posix/posix.o 00:03:20.198 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.198 LIB libspdk_env_dpdk_rpc.a 00:03:20.198 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.457 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.457 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.457 CC module/keyring/file/keyring_rpc.o 00:03:20.457 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.457 LIB libspdk_scheduler_dynamic.a 00:03:20.457 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.457 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.457 CC module/accel/error/accel_error_rpc.o 00:03:20.457 LIB libspdk_blob_bdev.a 00:03:20.457 LIB libspdk_accel_ioat.a 00:03:20.457 SO libspdk_blob_bdev.so.11.0 00:03:20.457 SO libspdk_accel_ioat.so.6.0 00:03:20.457 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.457 LIB libspdk_keyring_file.a 00:03:20.457 SYMLINK libspdk_blob_bdev.so 00:03:20.457 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:20.457 SYMLINK libspdk_accel_ioat.so 00:03:20.457 SO libspdk_keyring_file.so.2.0 00:03:20.457 LIB libspdk_accel_dsa.a 00:03:20.457 LIB libspdk_accel_error.a 00:03:20.457 LIB libspdk_accel_iaa.a 00:03:20.457 SO libspdk_accel_error.so.2.0 00:03:20.457 SO libspdk_accel_dsa.so.5.0 00:03:20.457 SYMLINK libspdk_keyring_file.so 00:03:20.715 SO libspdk_accel_iaa.so.3.0 00:03:20.715 SYMLINK libspdk_accel_error.so 00:03:20.715 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.715 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.715 SYMLINK libspdk_accel_dsa.so 00:03:20.715 CC module/fsdev/aio/linux_aio_mgr.o 00:03:20.715 CC module/keyring/linux/keyring.o 00:03:20.715 SYMLINK libspdk_accel_iaa.so 00:03:20.715 CC module/keyring/linux/keyring_rpc.o 00:03:20.715 LIB libspdk_scheduler_gscheduler.a 00:03:20.715 SO libspdk_scheduler_gscheduler.so.4.0 00:03:20.715 LIB libspdk_keyring_linux.a 00:03:20.715 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.715 LIB libspdk_fsdev_aio.a 00:03:20.715 SYMLINK libspdk_scheduler_gscheduler.so 00:03:20.715 SO libspdk_keyring_linux.so.1.0 00:03:20.715 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.715 CC module/bdev/error/vbdev_error.o 00:03:20.715 SO libspdk_fsdev_aio.so.1.0 00:03:20.715 CC module/bdev/delay/vbdev_delay.o 00:03:20.715 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:20.715 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.973 SYMLINK libspdk_keyring_linux.so 00:03:20.973 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:20.973 CC module/bdev/lvol/vbdev_lvol.o 00:03:20.973 SYMLINK libspdk_fsdev_aio.so 00:03:20.973 CC module/bdev/gpt/gpt.o 00:03:20.973 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.973 CC module/bdev/malloc/bdev_malloc.o 00:03:20.973 CC module/bdev/null/bdev_null.o 00:03:20.973 CC module/bdev/null/bdev_null_rpc.o 00:03:20.973 LIB libspdk_blobfs_bdev.a 00:03:20.973 LIB libspdk_sock_posix.a 00:03:20.973 SO libspdk_blobfs_bdev.so.6.0 00:03:20.973 SO libspdk_sock_posix.so.6.0 00:03:20.973 CC module/bdev/gpt/vbdev_gpt.o 00:03:20.973 SYMLINK libspdk_blobfs_bdev.so 00:03:20.973 
CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.973 SYMLINK libspdk_sock_posix.so 00:03:20.973 CC module/bdev/error/vbdev_error_rpc.o 00:03:20.973 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.231 LIB libspdk_bdev_delay.a 00:03:21.231 LIB libspdk_bdev_error.a 00:03:21.231 CC module/bdev/nvme/bdev_nvme.o 00:03:21.231 SO libspdk_bdev_delay.so.6.0 00:03:21.231 SO libspdk_bdev_error.so.6.0 00:03:21.231 LIB libspdk_bdev_null.a 00:03:21.231 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.231 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.231 SO libspdk_bdev_null.so.6.0 00:03:21.231 CC module/bdev/nvme/nvme_rpc.o 00:03:21.231 LIB libspdk_bdev_gpt.a 00:03:21.231 LIB libspdk_bdev_malloc.a 00:03:21.231 SYMLINK libspdk_bdev_error.so 00:03:21.231 CC module/bdev/nvme/bdev_mdns_client.o 00:03:21.231 SO libspdk_bdev_gpt.so.6.0 00:03:21.231 SYMLINK libspdk_bdev_delay.so 00:03:21.231 SO libspdk_bdev_malloc.so.6.0 00:03:21.231 SYMLINK libspdk_bdev_null.so 00:03:21.231 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.231 CC module/bdev/nvme/vbdev_opal.o 00:03:21.231 SYMLINK libspdk_bdev_gpt.so 00:03:21.231 SYMLINK libspdk_bdev_malloc.so 00:03:21.489 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:21.489 LIB libspdk_bdev_lvol.a 00:03:21.489 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:21.489 SO libspdk_bdev_lvol.so.6.0 00:03:21.489 CC module/bdev/split/vbdev_split.o 00:03:21.489 CC module/bdev/raid/bdev_raid.o 00:03:21.489 LIB libspdk_bdev_passthru.a 00:03:21.489 SYMLINK libspdk_bdev_lvol.so 00:03:21.489 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.489 SO libspdk_bdev_passthru.so.6.0 00:03:21.489 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.489 CC module/bdev/raid/bdev_raid_sb.o 00:03:21.489 CC module/bdev/raid/raid0.o 00:03:21.489 SYMLINK libspdk_bdev_passthru.so 00:03:21.489 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.746 LIB libspdk_bdev_split.a 00:03:21.746 SO libspdk_bdev_split.so.6.0 00:03:21.746 CC module/bdev/xnvme/bdev_xnvme.o 00:03:21.746 SYMLINK libspdk_bdev_split.so 00:03:21.746 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:21.746 CC module/bdev/aio/bdev_aio.o 00:03:21.746 CC module/bdev/raid/raid1.o 00:03:21.746 CC module/bdev/raid/concat.o 00:03:21.746 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.746 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.746 LIB libspdk_bdev_zone_block.a 00:03:21.746 LIB libspdk_bdev_xnvme.a 00:03:22.004 SO libspdk_bdev_zone_block.so.6.0 00:03:22.004 SO libspdk_bdev_xnvme.so.3.0 00:03:22.004 SYMLINK libspdk_bdev_zone_block.so 00:03:22.004 CC module/bdev/ftl/bdev_ftl.o 00:03:22.004 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.004 SYMLINK libspdk_bdev_xnvme.so 00:03:22.004 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.004 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.004 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.004 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.004 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.004 LIB libspdk_bdev_aio.a 00:03:22.004 SO libspdk_bdev_aio.so.6.0 00:03:22.004 SYMLINK libspdk_bdev_aio.so 00:03:22.263 LIB libspdk_bdev_ftl.a 00:03:22.263 SO libspdk_bdev_ftl.so.6.0 00:03:22.263 SYMLINK libspdk_bdev_ftl.so 00:03:22.263 LIB libspdk_bdev_raid.a 00:03:22.263 SO libspdk_bdev_raid.so.6.0 00:03:22.263 LIB libspdk_bdev_iscsi.a 00:03:22.263 LIB libspdk_bdev_virtio.a 00:03:22.263 SO libspdk_bdev_iscsi.so.6.0 00:03:22.521 SYMLINK libspdk_bdev_raid.so 00:03:22.521 SO libspdk_bdev_virtio.so.6.0 00:03:22.521 SYMLINK libspdk_bdev_iscsi.so 00:03:22.521 SYMLINK libspdk_bdev_virtio.so 00:03:23.456 LIB libspdk_bdev_nvme.a 
00:03:23.715 SO libspdk_bdev_nvme.so.7.1 00:03:23.715 SYMLINK libspdk_bdev_nvme.so 00:03:23.973 CC module/event/subsystems/keyring/keyring.o 00:03:23.973 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.973 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.973 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.973 CC module/event/subsystems/sock/sock.o 00:03:23.973 CC module/event/subsystems/vmd/vmd.o 00:03:23.973 CC module/event/subsystems/fsdev/fsdev.o 00:03:23.973 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.231 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.231 LIB libspdk_event_scheduler.a 00:03:24.231 LIB libspdk_event_fsdev.a 00:03:24.231 LIB libspdk_event_iobuf.a 00:03:24.231 LIB libspdk_event_keyring.a 00:03:24.231 SO libspdk_event_scheduler.so.4.0 00:03:24.232 LIB libspdk_event_vmd.a 00:03:24.232 LIB libspdk_event_sock.a 00:03:24.232 SO libspdk_event_fsdev.so.1.0 00:03:24.232 SO libspdk_event_keyring.so.1.0 00:03:24.232 SO libspdk_event_iobuf.so.3.0 00:03:24.232 LIB libspdk_event_vhost_blk.a 00:03:24.232 SO libspdk_event_sock.so.5.0 00:03:24.232 SO libspdk_event_vmd.so.6.0 00:03:24.232 SYMLINK libspdk_event_scheduler.so 00:03:24.232 SO libspdk_event_vhost_blk.so.3.0 00:03:24.232 SYMLINK libspdk_event_fsdev.so 00:03:24.232 SYMLINK libspdk_event_iobuf.so 00:03:24.232 SYMLINK libspdk_event_keyring.so 00:03:24.232 SYMLINK libspdk_event_sock.so 00:03:24.232 SYMLINK libspdk_event_vmd.so 00:03:24.232 SYMLINK libspdk_event_vhost_blk.so 00:03:24.491 CC module/event/subsystems/accel/accel.o 00:03:24.491 LIB libspdk_event_accel.a 00:03:24.749 SO libspdk_event_accel.so.6.0 00:03:24.749 SYMLINK libspdk_event_accel.so 00:03:25.007 CC module/event/subsystems/bdev/bdev.o 00:03:25.007 LIB libspdk_event_bdev.a 00:03:25.007 SO libspdk_event_bdev.so.6.0 00:03:25.007 SYMLINK libspdk_event_bdev.so 00:03:25.264 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.265 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.265 CC module/event/subsystems/nbd/nbd.o 00:03:25.265 CC module/event/subsystems/ublk/ublk.o 00:03:25.265 CC module/event/subsystems/scsi/scsi.o 00:03:25.265 LIB libspdk_event_scsi.a 00:03:25.265 LIB libspdk_event_nbd.a 00:03:25.265 LIB libspdk_event_ublk.a 00:03:25.522 SO libspdk_event_scsi.so.6.0 00:03:25.522 SO libspdk_event_nbd.so.6.0 00:03:25.522 SO libspdk_event_ublk.so.3.0 00:03:25.522 SYMLINK libspdk_event_ublk.so 00:03:25.522 SYMLINK libspdk_event_nbd.so 00:03:25.522 SYMLINK libspdk_event_scsi.so 00:03:25.522 LIB libspdk_event_nvmf.a 00:03:25.522 SO libspdk_event_nvmf.so.6.0 00:03:25.522 SYMLINK libspdk_event_nvmf.so 00:03:25.522 CC module/event/subsystems/iscsi/iscsi.o 00:03:25.522 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:25.779 LIB libspdk_event_vhost_scsi.a 00:03:25.779 LIB libspdk_event_iscsi.a 00:03:25.779 SO libspdk_event_vhost_scsi.so.3.0 00:03:25.779 SO libspdk_event_iscsi.so.6.0 00:03:25.779 SYMLINK libspdk_event_vhost_scsi.so 00:03:25.779 SYMLINK libspdk_event_iscsi.so 00:03:26.037 SO libspdk.so.6.0 00:03:26.037 SYMLINK libspdk.so 00:03:26.037 CC app/spdk_lspci/spdk_lspci.o 00:03:26.037 CXX app/trace/trace.o 00:03:26.037 CC app/trace_record/trace_record.o 00:03:26.294 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:26.294 CC app/nvmf_tgt/nvmf_main.o 00:03:26.294 CC app/iscsi_tgt/iscsi_tgt.o 00:03:26.294 CC examples/util/zipf/zipf.o 00:03:26.294 CC examples/ioat/perf/perf.o 00:03:26.294 CC test/thread/poller_perf/poller_perf.o 00:03:26.294 CC app/spdk_tgt/spdk_tgt.o 00:03:26.294 LINK spdk_lspci 00:03:26.294 LINK interrupt_tgt 
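One practical point as the apps start linking here: the DPDK sub-build earlier in this log was configured with b_sanitize=address, and assuming SPDK's own configure used the matching --enable-asan (typical for this CI job, though the flag itself is not shown in this excerpt), the binaries linked in this stretch, spdk_lspci and spdk_tgt among them, run under AddressSanitizer. A hedged example of exercising one of them with standard ASan runtime switches:

    # Sketch: run one of the just-linked apps with ASan tuned. build/bin is
    # SPDK's conventional output directory, assumed here rather than shown
    # in the log; detect_leaks and halt_on_error are standard ASan runtime
    # options.
    ASAN_OPTIONS=detect_leaks=1:halt_on_error=1 ./build/bin/spdk_lspci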
00:03:26.294 LINK nvmf_tgt 00:03:26.294 LINK zipf 00:03:26.294 LINK poller_perf 00:03:26.294 LINK iscsi_tgt 00:03:26.294 LINK spdk_trace_record 00:03:26.294 LINK ioat_perf 00:03:26.551 LINK spdk_tgt 00:03:26.551 CC app/spdk_nvme_perf/perf.o 00:03:26.551 LINK spdk_trace 00:03:26.551 CC examples/ioat/verify/verify.o 00:03:26.551 TEST_HEADER include/spdk/accel.h 00:03:26.551 TEST_HEADER include/spdk/accel_module.h 00:03:26.551 TEST_HEADER include/spdk/assert.h 00:03:26.551 TEST_HEADER include/spdk/barrier.h 00:03:26.551 TEST_HEADER include/spdk/base64.h 00:03:26.551 TEST_HEADER include/spdk/bdev.h 00:03:26.551 TEST_HEADER include/spdk/bdev_module.h 00:03:26.551 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.551 TEST_HEADER include/spdk/bit_array.h 00:03:26.551 TEST_HEADER include/spdk/bit_pool.h 00:03:26.551 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.551 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.551 TEST_HEADER include/spdk/blobfs.h 00:03:26.551 CC app/spdk_nvme_identify/identify.o 00:03:26.551 TEST_HEADER include/spdk/blob.h 00:03:26.551 TEST_HEADER include/spdk/conf.h 00:03:26.551 TEST_HEADER include/spdk/config.h 00:03:26.551 TEST_HEADER include/spdk/cpuset.h 00:03:26.551 TEST_HEADER include/spdk/crc16.h 00:03:26.551 TEST_HEADER include/spdk/crc32.h 00:03:26.551 TEST_HEADER include/spdk/crc64.h 00:03:26.551 TEST_HEADER include/spdk/dif.h 00:03:26.551 CC app/spdk_nvme_discover/discovery_aer.o 00:03:26.551 TEST_HEADER include/spdk/dma.h 00:03:26.551 TEST_HEADER include/spdk/endian.h 00:03:26.551 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.551 TEST_HEADER include/spdk/env.h 00:03:26.551 TEST_HEADER include/spdk/event.h 00:03:26.551 TEST_HEADER include/spdk/fd_group.h 00:03:26.552 TEST_HEADER include/spdk/fd.h 00:03:26.552 TEST_HEADER include/spdk/file.h 00:03:26.552 TEST_HEADER include/spdk/fsdev.h 00:03:26.552 TEST_HEADER include/spdk/fsdev_module.h 00:03:26.552 TEST_HEADER include/spdk/ftl.h 00:03:26.552 CC app/spdk_top/spdk_top.o 00:03:26.552 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:26.552 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.552 TEST_HEADER include/spdk/hexlify.h 00:03:26.552 TEST_HEADER include/spdk/histogram_data.h 00:03:26.552 TEST_HEADER include/spdk/idxd.h 00:03:26.552 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.552 TEST_HEADER include/spdk/init.h 00:03:26.552 CC test/dma/test_dma/test_dma.o 00:03:26.552 TEST_HEADER include/spdk/ioat.h 00:03:26.552 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.552 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.552 TEST_HEADER include/spdk/json.h 00:03:26.552 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.552 TEST_HEADER include/spdk/keyring.h 00:03:26.552 TEST_HEADER include/spdk/keyring_module.h 00:03:26.552 TEST_HEADER include/spdk/likely.h 00:03:26.552 TEST_HEADER include/spdk/log.h 00:03:26.552 TEST_HEADER include/spdk/lvol.h 00:03:26.552 TEST_HEADER include/spdk/md5.h 00:03:26.552 TEST_HEADER include/spdk/memory.h 00:03:26.552 TEST_HEADER include/spdk/mmio.h 00:03:26.552 TEST_HEADER include/spdk/nbd.h 00:03:26.552 TEST_HEADER include/spdk/net.h 00:03:26.552 TEST_HEADER include/spdk/notify.h 00:03:26.552 TEST_HEADER include/spdk/nvme.h 00:03:26.552 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.552 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.552 CC test/app/bdev_svc/bdev_svc.o 00:03:26.552 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.552 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.552 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.552 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.552 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:03:26.552 TEST_HEADER include/spdk/nvmf.h 00:03:26.552 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.552 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.552 TEST_HEADER include/spdk/opal.h 00:03:26.552 TEST_HEADER include/spdk/opal_spec.h 00:03:26.809 TEST_HEADER include/spdk/pci_ids.h 00:03:26.809 TEST_HEADER include/spdk/pipe.h 00:03:26.809 TEST_HEADER include/spdk/queue.h 00:03:26.809 TEST_HEADER include/spdk/reduce.h 00:03:26.809 TEST_HEADER include/spdk/rpc.h 00:03:26.809 TEST_HEADER include/spdk/scheduler.h 00:03:26.809 TEST_HEADER include/spdk/scsi.h 00:03:26.809 TEST_HEADER include/spdk/scsi_spec.h 00:03:26.809 TEST_HEADER include/spdk/sock.h 00:03:26.809 TEST_HEADER include/spdk/stdinc.h 00:03:26.809 TEST_HEADER include/spdk/string.h 00:03:26.809 CC app/spdk_dd/spdk_dd.o 00:03:26.809 TEST_HEADER include/spdk/thread.h 00:03:26.809 TEST_HEADER include/spdk/trace.h 00:03:26.809 TEST_HEADER include/spdk/trace_parser.h 00:03:26.809 TEST_HEADER include/spdk/tree.h 00:03:26.809 TEST_HEADER include/spdk/ublk.h 00:03:26.809 TEST_HEADER include/spdk/util.h 00:03:26.809 TEST_HEADER include/spdk/uuid.h 00:03:26.809 TEST_HEADER include/spdk/version.h 00:03:26.809 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.809 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.809 TEST_HEADER include/spdk/vhost.h 00:03:26.809 TEST_HEADER include/spdk/vmd.h 00:03:26.809 TEST_HEADER include/spdk/xor.h 00:03:26.809 TEST_HEADER include/spdk/zipf.h 00:03:26.809 CXX test/cpp_headers/accel.o 00:03:26.809 CC app/fio/nvme/fio_plugin.o 00:03:26.809 LINK verify 00:03:26.809 LINK spdk_nvme_discover 00:03:26.809 LINK bdev_svc 00:03:26.809 CXX test/cpp_headers/accel_module.o 00:03:26.809 CXX test/cpp_headers/assert.o 00:03:27.067 LINK spdk_dd 00:03:27.067 CXX test/cpp_headers/barrier.o 00:03:27.067 CC examples/thread/thread/thread_ex.o 00:03:27.067 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.067 LINK test_dma 00:03:27.067 CXX test/cpp_headers/base64.o 00:03:27.324 CC examples/sock/hello_world/hello_sock.o 00:03:27.324 LINK spdk_nvme_perf 00:03:27.324 CXX test/cpp_headers/bdev.o 00:03:27.324 LINK spdk_nvme 00:03:27.324 LINK thread 00:03:27.324 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.324 LINK spdk_nvme_identify 00:03:27.324 CC examples/vmd/led/led.o 00:03:27.324 LINK nvme_fuzz 00:03:27.324 CC examples/idxd/perf/perf.o 00:03:27.324 CXX test/cpp_headers/bdev_module.o 00:03:27.581 LINK hello_sock 00:03:27.581 LINK lsvmd 00:03:27.581 CC app/fio/bdev/fio_plugin.o 00:03:27.581 LINK led 00:03:27.581 LINK spdk_top 00:03:27.581 CC app/vhost/vhost.o 00:03:27.581 CXX test/cpp_headers/bdev_zone.o 00:03:27.581 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:27.581 CC examples/accel/perf/accel_perf.o 00:03:27.838 CXX test/cpp_headers/bit_array.o 00:03:27.838 LINK vhost 00:03:27.838 CC examples/blob/hello_world/hello_blob.o 00:03:27.838 LINK idxd_perf 00:03:27.838 CC examples/blob/cli/blobcli.o 00:03:27.838 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:27.838 CXX test/cpp_headers/bit_pool.o 00:03:27.838 CC test/env/mem_callbacks/mem_callbacks.o 00:03:27.838 CXX test/cpp_headers/blob_bdev.o 00:03:27.838 LINK hello_blob 00:03:27.838 LINK spdk_bdev 00:03:27.838 CC test/env/vtophys/vtophys.o 00:03:28.095 LINK hello_fsdev 00:03:28.095 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.095 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.095 LINK vtophys 00:03:28.095 CC test/env/memory/memory_ut.o 00:03:28.095 CC test/env/pci/pci_ut.o 00:03:28.095 LINK accel_perf 00:03:28.095 LINK 
env_dpdk_post_init 00:03:28.352 LINK blobcli 00:03:28.352 CXX test/cpp_headers/blobfs.o 00:03:28.352 CC test/app/histogram_perf/histogram_perf.o 00:03:28.352 CC examples/nvme/hello_world/hello_world.o 00:03:28.352 LINK mem_callbacks 00:03:28.352 CXX test/cpp_headers/blob.o 00:03:28.352 CC test/app/jsoncat/jsoncat.o 00:03:28.352 CXX test/cpp_headers/conf.o 00:03:28.352 CC examples/nvme/reconnect/reconnect.o 00:03:28.352 LINK histogram_perf 00:03:28.651 CXX test/cpp_headers/config.o 00:03:28.651 LINK hello_world 00:03:28.651 LINK jsoncat 00:03:28.651 CXX test/cpp_headers/cpuset.o 00:03:28.651 LINK pci_ut 00:03:28.651 CXX test/cpp_headers/crc16.o 00:03:28.651 CC test/app/stub/stub.o 00:03:28.651 CXX test/cpp_headers/crc32.o 00:03:28.651 CC test/event/event_perf/event_perf.o 00:03:28.651 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.651 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.651 LINK reconnect 00:03:28.909 CXX test/cpp_headers/crc64.o 00:03:28.909 LINK stub 00:03:28.909 LINK event_perf 00:03:28.909 CC test/rpc_client/rpc_client_test.o 00:03:28.909 CC test/nvme/aer/aer.o 00:03:28.909 CXX test/cpp_headers/dif.o 00:03:28.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:28.909 CC test/event/reactor/reactor.o 00:03:28.909 CC test/accel/dif/dif.o 00:03:28.909 CXX test/cpp_headers/dma.o 00:03:28.909 LINK iscsi_fuzz 00:03:28.909 LINK rpc_client_test 00:03:29.167 LINK aer 00:03:29.167 CC test/blobfs/mkfs/mkfs.o 00:03:29.167 LINK reactor 00:03:29.167 LINK vhost_fuzz 00:03:29.167 CXX test/cpp_headers/endian.o 00:03:29.167 LINK memory_ut 00:03:29.167 CXX test/cpp_headers/env_dpdk.o 00:03:29.167 CC test/event/reactor_perf/reactor_perf.o 00:03:29.167 CC test/nvme/reset/reset.o 00:03:29.167 LINK mkfs 00:03:29.167 CC test/event/app_repeat/app_repeat.o 00:03:29.424 CC test/lvol/esnap/esnap.o 00:03:29.424 CXX test/cpp_headers/env.o 00:03:29.424 CC test/event/scheduler/scheduler.o 00:03:29.424 LINK reactor_perf 00:03:29.424 CXX test/cpp_headers/event.o 00:03:29.424 LINK nvme_manage 00:03:29.424 CXX test/cpp_headers/fd_group.o 00:03:29.424 LINK app_repeat 00:03:29.424 LINK reset 00:03:29.424 LINK dif 00:03:29.424 CXX test/cpp_headers/fd.o 00:03:29.424 CXX test/cpp_headers/file.o 00:03:29.424 CC examples/nvme/arbitration/arbitration.o 00:03:29.682 CC examples/nvme/hotplug/hotplug.o 00:03:29.682 CXX test/cpp_headers/fsdev.o 00:03:29.682 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:29.682 LINK scheduler 00:03:29.682 CC test/nvme/sgl/sgl.o 00:03:29.682 CXX test/cpp_headers/fsdev_module.o 00:03:29.682 CC test/nvme/e2edp/nvme_dp.o 00:03:29.682 CC test/nvme/overhead/overhead.o 00:03:29.682 CC test/nvme/err_injection/err_injection.o 00:03:29.682 LINK cmb_copy 00:03:29.682 LINK hotplug 00:03:29.940 CC test/nvme/startup/startup.o 00:03:29.940 CXX test/cpp_headers/ftl.o 00:03:29.940 LINK arbitration 00:03:29.940 LINK sgl 00:03:29.940 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.940 LINK nvme_dp 00:03:29.940 LINK err_injection 00:03:29.940 CXX test/cpp_headers/gpt_spec.o 00:03:29.940 LINK startup 00:03:29.940 CXX test/cpp_headers/hexlify.o 00:03:29.940 LINK overhead 00:03:29.940 CXX test/cpp_headers/histogram_data.o 00:03:30.196 CC examples/nvme/abort/abort.o 00:03:30.196 CC test/nvme/reserve/reserve.o 00:03:30.196 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.196 CC test/nvme/simple_copy/simple_copy.o 00:03:30.196 CC test/bdev/bdevio/bdevio.o 00:03:30.196 CC test/nvme/connect_stress/connect_stress.o 00:03:30.196 CC test/nvme/boot_partition/boot_partition.o 00:03:30.196 CXX 
test/cpp_headers/idxd.o 00:03:30.196 LINK pmr_persistence 00:03:30.196 LINK reserve 00:03:30.196 CC test/nvme/compliance/nvme_compliance.o 00:03:30.453 LINK simple_copy 00:03:30.453 LINK boot_partition 00:03:30.453 CXX test/cpp_headers/idxd_spec.o 00:03:30.453 LINK connect_stress 00:03:30.453 LINK abort 00:03:30.453 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.453 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.453 CC test/nvme/fdp/fdp.o 00:03:30.453 CXX test/cpp_headers/init.o 00:03:30.453 CXX test/cpp_headers/ioat.o 00:03:30.710 LINK bdevio 00:03:30.710 LINK nvme_compliance 00:03:30.710 CC test/nvme/cuse/cuse.o 00:03:30.710 LINK fused_ordering 00:03:30.710 LINK doorbell_aers 00:03:30.710 CXX test/cpp_headers/ioat_spec.o 00:03:30.710 CXX test/cpp_headers/iscsi_spec.o 00:03:30.710 CXX test/cpp_headers/json.o 00:03:30.710 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.710 CXX test/cpp_headers/jsonrpc.o 00:03:30.710 CC examples/bdev/bdevperf/bdevperf.o 00:03:30.710 CXX test/cpp_headers/keyring.o 00:03:30.710 LINK fdp 00:03:30.710 CXX test/cpp_headers/keyring_module.o 00:03:30.967 CXX test/cpp_headers/likely.o 00:03:30.967 CXX test/cpp_headers/log.o 00:03:30.967 CXX test/cpp_headers/lvol.o 00:03:30.967 CXX test/cpp_headers/md5.o 00:03:30.967 CXX test/cpp_headers/memory.o 00:03:30.967 LINK hello_bdev 00:03:30.967 CXX test/cpp_headers/mmio.o 00:03:30.967 CXX test/cpp_headers/nbd.o 00:03:30.967 CXX test/cpp_headers/net.o 00:03:30.967 CXX test/cpp_headers/notify.o 00:03:30.967 CXX test/cpp_headers/nvme.o 00:03:30.967 CXX test/cpp_headers/nvme_intel.o 00:03:30.967 CXX test/cpp_headers/nvme_ocssd.o 00:03:30.967 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.224 CXX test/cpp_headers/nvme_spec.o 00:03:31.224 CXX test/cpp_headers/nvme_zns.o 00:03:31.224 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.224 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.224 CXX test/cpp_headers/nvmf.o 00:03:31.224 CXX test/cpp_headers/nvmf_spec.o 00:03:31.224 CXX test/cpp_headers/nvmf_transport.o 00:03:31.224 CXX test/cpp_headers/opal.o 00:03:31.224 CXX test/cpp_headers/opal_spec.o 00:03:31.224 CXX test/cpp_headers/pci_ids.o 00:03:31.224 CXX test/cpp_headers/pipe.o 00:03:31.224 CXX test/cpp_headers/queue.o 00:03:31.224 CXX test/cpp_headers/reduce.o 00:03:31.224 CXX test/cpp_headers/rpc.o 00:03:31.481 CXX test/cpp_headers/scheduler.o 00:03:31.481 CXX test/cpp_headers/scsi.o 00:03:31.481 CXX test/cpp_headers/scsi_spec.o 00:03:31.481 CXX test/cpp_headers/sock.o 00:03:31.481 CXX test/cpp_headers/stdinc.o 00:03:31.481 CXX test/cpp_headers/string.o 00:03:31.481 CXX test/cpp_headers/thread.o 00:03:31.481 CXX test/cpp_headers/trace.o 00:03:31.481 CXX test/cpp_headers/trace_parser.o 00:03:31.481 CXX test/cpp_headers/tree.o 00:03:31.481 CXX test/cpp_headers/ublk.o 00:03:31.481 CXX test/cpp_headers/util.o 00:03:31.481 CXX test/cpp_headers/uuid.o 00:03:31.481 LINK bdevperf 00:03:31.481 CXX test/cpp_headers/version.o 00:03:31.481 CXX test/cpp_headers/vfio_user_pci.o 00:03:31.481 CXX test/cpp_headers/vfio_user_spec.o 00:03:31.739 CXX test/cpp_headers/vhost.o 00:03:31.739 CXX test/cpp_headers/vmd.o 00:03:31.739 CXX test/cpp_headers/xor.o 00:03:31.739 CXX test/cpp_headers/zipf.o 00:03:31.739 LINK cuse 00:03:31.996 CC examples/nvmf/nvmf/nvmf.o 00:03:32.253 LINK nvmf 00:03:33.629 LINK esnap 00:03:33.888 00:03:33.888 real 1m4.813s 00:03:33.888 user 6m4.247s 00:03:33.888 sys 1m8.873s 00:03:33.888 19:53:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.888 19:53:07 make -- common/autotest_common.sh@10 -- $ set 
+x 00:03:33.888 ************************************ 00:03:33.888 END TEST make 00:03:33.888 ************************************ 00:03:34.146 19:53:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.146 19:53:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.146 19:53:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.146 19:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.146 19:53:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.146 19:53:07 -- pm/common@44 -- $ pid=5079 00:03:34.146 19:53:07 -- pm/common@50 -- $ kill -TERM 5079 00:03:34.146 19:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.146 19:53:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.146 19:53:07 -- pm/common@44 -- $ pid=5080 00:03:34.146 19:53:07 -- pm/common@50 -- $ kill -TERM 5080 00:03:34.146 19:53:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:34.146 19:53:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:34.146 19:53:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.146 19:53:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.146 19:53:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.146 19:53:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.146 19:53:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.146 19:53:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.146 19:53:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.146 19:53:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.146 19:53:07 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.146 19:53:07 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.146 19:53:07 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.146 19:53:07 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.146 19:53:07 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.146 19:53:07 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.146 19:53:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.146 19:53:07 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.146 19:53:07 -- scripts/common.sh@345 -- # : 1 00:03:34.146 19:53:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.146 19:53:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.146 19:53:07 -- scripts/common.sh@365 -- # decimal 1 00:03:34.146 19:53:07 -- scripts/common.sh@353 -- # local d=1 00:03:34.146 19:53:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.146 19:53:07 -- scripts/common.sh@355 -- # echo 1 00:03:34.146 19:53:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.146 19:53:07 -- scripts/common.sh@366 -- # decimal 2 00:03:34.146 19:53:07 -- scripts/common.sh@353 -- # local d=2 00:03:34.147 19:53:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.147 19:53:07 -- scripts/common.sh@355 -- # echo 2 00:03:34.147 19:53:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.147 19:53:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.147 19:53:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.147 19:53:07 -- scripts/common.sh@368 -- # return 0 00:03:34.147 19:53:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.147 19:53:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.147 --rc genhtml_branch_coverage=1 00:03:34.147 --rc genhtml_function_coverage=1 00:03:34.147 --rc genhtml_legend=1 00:03:34.147 --rc geninfo_all_blocks=1 00:03:34.147 --rc geninfo_unexecuted_blocks=1 00:03:34.147 00:03:34.147 ' 00:03:34.147 19:53:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.147 --rc genhtml_branch_coverage=1 00:03:34.147 --rc genhtml_function_coverage=1 00:03:34.147 --rc genhtml_legend=1 00:03:34.147 --rc geninfo_all_blocks=1 00:03:34.147 --rc geninfo_unexecuted_blocks=1 00:03:34.147 00:03:34.147 ' 00:03:34.147 19:53:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.147 --rc genhtml_branch_coverage=1 00:03:34.147 --rc genhtml_function_coverage=1 00:03:34.147 --rc genhtml_legend=1 00:03:34.147 --rc geninfo_all_blocks=1 00:03:34.147 --rc geninfo_unexecuted_blocks=1 00:03:34.147 00:03:34.147 ' 00:03:34.147 19:53:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.147 --rc genhtml_branch_coverage=1 00:03:34.147 --rc genhtml_function_coverage=1 00:03:34.147 --rc genhtml_legend=1 00:03:34.147 --rc geninfo_all_blocks=1 00:03:34.147 --rc geninfo_unexecuted_blocks=1 00:03:34.147 00:03:34.147 ' 00:03:34.147 19:53:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.147 19:53:07 -- nvmf/common.sh@7 -- # uname -s 00:03:34.147 19:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.147 19:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.147 19:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.147 19:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.147 19:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.147 19:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.147 19:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.147 19:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.147 19:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.147 19:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.147 19:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11db6e10-0359-4a07-932f-9b365e9860cf 00:03:34.147 
19:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=11db6e10-0359-4a07-932f-9b365e9860cf 00:03:34.147 19:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.147 19:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.147 19:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:34.147 19:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.147 19:53:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.147 19:53:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.147 19:53:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.147 19:53:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.147 19:53:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.147 19:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.147 19:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.147 19:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.147 19:53:07 -- paths/export.sh@5 -- # export PATH 00:03:34.147 19:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.147 19:53:07 -- nvmf/common.sh@51 -- # : 0 00:03:34.147 19:53:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.147 19:53:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:34.147 19:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:34.147 19:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.147 19:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.147 19:53:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.147 19:53:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.147 19:53:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.147 19:53:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.147 19:53:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.147 19:53:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.147 19:53:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.147 19:53:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.147 19:53:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.147 19:53:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.147 19:53:07 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.147 19:53:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.405 19:53:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.405 19:53:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.405 19:53:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54217 00:03:34.405 19:53:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.405 19:53:07 -- pm/common@17 -- # local monitor 00:03:34.405 19:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.405 19:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.405 19:53:07 -- pm/common@25 -- # sleep 1 00:03:34.405 19:53:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.405 19:53:07 -- pm/common@21 -- # date +%s 00:03:34.405 19:53:07 -- pm/common@21 -- # date +%s 00:03:34.405 19:53:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732045987 00:03:34.405 19:53:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732045987 00:03:34.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732045987_collect-cpu-load.pm.log 00:03:34.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732045987_collect-vmstat.pm.log 00:03:35.343 19:53:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.343 19:53:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.343 19:53:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.343 19:53:08 -- common/autotest_common.sh@10 -- # set +x 00:03:35.343 19:53:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.343 19:53:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.343 19:53:08 -- common/autotest_common.sh@10 -- # set +x 00:03:35.343 19:53:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.343 19:53:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.343 19:53:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.343 19:53:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.343 19:53:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.343 19:53:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.343 19:53:09 -- common/autotest_common.sh@1457 -- # uname 00:03:35.343 19:53:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.343 19:53:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.343 19:53:09 -- common/autotest_common.sh@1477 -- # uname 00:03:35.343 19:53:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.343 19:53:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.343 19:53:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.343 lcov: LCOV version 1.15 00:03:35.343 19:53:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.338 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.338 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:05.267 19:53:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:05.267 19:53:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.267 19:53:37 -- common/autotest_common.sh@10 -- # set +x 00:04:05.267 19:53:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:05.267 19:53:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.267 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:05.267 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:05.267 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:05.267 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:05.267 19:53:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:05.267 19:53:38 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:05.267 19:53:38 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:05.267 19:53:38 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:05.267 19:53:38 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.267 19:53:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:05.267 19:53:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:05.267 19:53:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.267 19:53:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:05.267 19:53:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.267 19:53:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.267 19:53:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:05.267 19:53:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:05.267 19:53:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.267 No valid GPT data, bailing 00:04:05.267 19:53:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.267 19:53:38 -- scripts/common.sh@394 -- # pt= 00:04:05.267 19:53:38 -- scripts/common.sh@395 -- # return 1 00:04:05.267 19:53:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.267 1+0 records in 00:04:05.267 1+0 records out 00:04:05.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108723 s, 96.4 MB/s 00:04:05.267 19:53:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.267 19:53:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.267 19:53:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:05.267 19:53:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:05.267 19:53:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:05.267 No valid GPT data, bailing 00:04:05.267 19:53:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.267 19:53:38 -- scripts/common.sh@394 -- # pt= 00:04:05.267 19:53:38 -- scripts/common.sh@395 -- # return 1 00:04:05.267 19:53:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:05.267 1+0 records in 00:04:05.267 1+0 records out 00:04:05.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426498 s, 246 MB/s 00:04:05.267 19:53:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.267 19:53:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.267 19:53:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:05.267 19:53:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:05.267 19:53:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:05.268 No valid GPT data, bailing 00:04:05.268 19:53:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:05.268 19:53:39 -- scripts/common.sh@394 -- # pt= 00:04:05.268 19:53:39 -- scripts/common.sh@395 -- # return 1 00:04:05.268 19:53:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:05.268 1+0 
records in 00:04:05.268 1+0 records out 00:04:05.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361408 s, 290 MB/s 00:04:05.268 19:53:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.527 19:53:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.527 19:53:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:05.527 19:53:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:05.527 19:53:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:05.527 No valid GPT data, bailing 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # pt= 00:04:05.527 19:53:39 -- scripts/common.sh@395 -- # return 1 00:04:05.527 19:53:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:05.527 1+0 records in 00:04:05.527 1+0 records out 00:04:05.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447148 s, 235 MB/s 00:04:05.527 19:53:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.527 19:53:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.527 19:53:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:05.527 19:53:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:05.527 19:53:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:05.527 No valid GPT data, bailing 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # pt= 00:04:05.527 19:53:39 -- scripts/common.sh@395 -- # return 1 00:04:05.527 19:53:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:05.527 1+0 records in 00:04:05.527 1+0 records out 00:04:05.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040694 s, 258 MB/s 00:04:05.527 19:53:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.527 19:53:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.527 19:53:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:05.527 19:53:39 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:05.527 19:53:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:05.527 No valid GPT data, bailing 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:05.527 19:53:39 -- scripts/common.sh@394 -- # pt= 00:04:05.527 19:53:39 -- scripts/common.sh@395 -- # return 1 00:04:05.527 19:53:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:05.527 1+0 records in 00:04:05.527 1+0 records out 00:04:05.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384551 s, 273 MB/s 00:04:05.527 19:53:39 -- spdk/autotest.sh@105 -- # sync 00:04:05.527 19:53:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:05.527 19:53:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:05.527 19:53:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.433 19:53:40 -- spdk/autotest.sh@111 -- # uname -s 00:04:07.433 19:53:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:07.433 19:53:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:07.433 19:53:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:07.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.998 
Hugepages 00:04:07.998 node hugesize free / total 00:04:07.998 node0 1048576kB 0 / 0 00:04:07.998 node0 2048kB 0 / 0 00:04:07.998 00:04:07.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.998 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:07.998 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:07.998 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:07.998 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:08.255 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:08.255 19:53:41 -- spdk/autotest.sh@117 -- # uname -s 00:04:08.255 19:53:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:08.256 19:53:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:08.256 19:53:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.080 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.081 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.081 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.081 19:53:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:10.015 19:53:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:10.015 19:53:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:10.015 19:53:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.015 19:53:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:10.015 19:53:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.015 19:53:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.015 19:53:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.015 19:53:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.015 19:53:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.274 19:53:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:10.274 19:53:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:10.274 19:53:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.532 Waiting for block devices as requested 00:04:10.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.791 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.791 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.068 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:16.068 19:53:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.068 19:53:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1543 -- # continue 00:04:16.068 19:53:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.068 19:53:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1543 -- # continue 00:04:16.068 19:53:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.068 19:53:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1543 -- # continue 00:04:16.068 19:53:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.068 19:53:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.068 19:53:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.068 19:53:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:16.068 19:53:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.069 19:53:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.069 19:53:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.069 19:53:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
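The per-controller check traced above resolves the /dev/nvmeX node for each PCI address, then reads the oacs and unvmcap fields with nvme id-ctrl before deciding whether a namespace revert is needed. A minimal standalone sketch of that logic follows; the 0x8 namespace-management bit is inferred from the values in the trace (oacs ' 0x12a' -> 8) rather than copied from autotest_common.sh, and the BDF is one of the addresses reported in this run.

  # resolve the /dev/nvmeX controller node for a PCI BDF
  bdf=0000:00:10.0
  sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrlr=/dev/$(basename "$sysfs_path")               # e.g. /dev/nvme1 in the trace above
  # OACS bit 3 (0x8) indicates namespace management support
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
      # unvmcap == 0 means no unallocated capacity, so the revert step is skipped
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "$ctrlr: namespaces intact, skipping revert"
  fi
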
00:04:16.069 19:53:49 -- common/autotest_common.sh@1543 -- # continue 00:04:16.069 19:53:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:16.069 19:53:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.069 19:53:49 -- common/autotest_common.sh@10 -- # set +x 00:04:16.069 19:53:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:16.069 19:53:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.069 19:53:49 -- common/autotest_common.sh@10 -- # set +x 00:04:16.069 19:53:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.893 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.893 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.893 19:53:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:16.893 19:53:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.893 19:53:50 -- common/autotest_common.sh@10 -- # set +x 00:04:16.893 19:53:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:16.893 19:53:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:16.893 19:53:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:16.893 19:53:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:16.893 19:53:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:16.893 19:53:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:16.893 19:53:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:16.893 19:53:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:16.893 19:53:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.893 19:53:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.893 19:53:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.893 19:53:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:16.893 19:53:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:17.153 19:53:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:17.153 19:53:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:17.153 19:53:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:17.153 19:53:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.153 19:53:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:17.153 19:53:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.153 19:53:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:17.153 19:53:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
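The opal_revert_cleanup loop traced around this point reads each controller's PCI device ID from sysfs and keeps only devices matching 0x0a54; every emulated controller in this run reports 0x0010, so the list stays empty and the revert is skipped. A rough sketch of that filter, assuming the four BDFs that gen_nvme.sh reported earlier in the log:

  # keep only NVMe controllers whose PCI device ID is 0x0a54 (the ID checked in the trace)
  matches=()
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && matches+=("$bdf")
  done
  # empty on this host: all four controllers are 0x0010
  (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"
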
00:04:17.153 19:53:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:17.153 19:53:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:17.153 19:53:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.153 19:53:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:17.153 19:53:50 -- common/autotest_common.sh@1572 -- # return 0 00:04:17.153 19:53:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:17.153 19:53:50 -- common/autotest_common.sh@1580 -- # return 0 00:04:17.153 19:53:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:17.153 19:53:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:17.153 19:53:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:17.153 19:53:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:17.153 19:53:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:17.153 19:53:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.153 19:53:50 -- common/autotest_common.sh@10 -- # set +x 00:04:17.153 19:53:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:17.153 19:53:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.153 19:53:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.153 19:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.153 19:53:50 -- common/autotest_common.sh@10 -- # set +x 00:04:17.153 ************************************ 00:04:17.153 START TEST env 00:04:17.153 ************************************ 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.153 * Looking for test storage... 00:04:17.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.153 19:53:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.153 19:53:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.153 19:53:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.153 19:53:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.153 19:53:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.153 19:53:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.153 19:53:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.153 19:53:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.153 19:53:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.153 19:53:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.153 19:53:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.153 19:53:50 env -- scripts/common.sh@344 -- # case "$op" in 00:04:17.153 19:53:50 env -- scripts/common.sh@345 -- # : 1 00:04:17.153 19:53:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.153 19:53:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.153 19:53:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:17.153 19:53:50 env -- scripts/common.sh@353 -- # local d=1 00:04:17.153 19:53:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.153 19:53:50 env -- scripts/common.sh@355 -- # echo 1 00:04:17.153 19:53:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.153 19:53:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:17.153 19:53:50 env -- scripts/common.sh@353 -- # local d=2 00:04:17.153 19:53:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.153 19:53:50 env -- scripts/common.sh@355 -- # echo 2 00:04:17.153 19:53:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.153 19:53:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.153 19:53:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.153 19:53:50 env -- scripts/common.sh@368 -- # return 0 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.153 --rc genhtml_branch_coverage=1 00:04:17.153 --rc genhtml_function_coverage=1 00:04:17.153 --rc genhtml_legend=1 00:04:17.153 --rc geninfo_all_blocks=1 00:04:17.153 --rc geninfo_unexecuted_blocks=1 00:04:17.153 00:04:17.153 ' 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.153 --rc genhtml_branch_coverage=1 00:04:17.153 --rc genhtml_function_coverage=1 00:04:17.153 --rc genhtml_legend=1 00:04:17.153 --rc geninfo_all_blocks=1 00:04:17.153 --rc geninfo_unexecuted_blocks=1 00:04:17.153 00:04:17.153 ' 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.153 --rc genhtml_branch_coverage=1 00:04:17.153 --rc genhtml_function_coverage=1 00:04:17.153 --rc genhtml_legend=1 00:04:17.153 --rc geninfo_all_blocks=1 00:04:17.153 --rc geninfo_unexecuted_blocks=1 00:04:17.153 00:04:17.153 ' 00:04:17.153 19:53:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.153 --rc genhtml_branch_coverage=1 00:04:17.153 --rc genhtml_function_coverage=1 00:04:17.154 --rc genhtml_legend=1 00:04:17.154 --rc geninfo_all_blocks=1 00:04:17.154 --rc geninfo_unexecuted_blocks=1 00:04:17.154 00:04:17.154 ' 00:04:17.154 19:53:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:17.154 19:53:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.154 19:53:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.154 19:53:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.154 ************************************ 00:04:17.154 START TEST env_memory 00:04:17.154 ************************************ 00:04:17.154 19:53:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:17.154 00:04:17.154 00:04:17.154 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.154 http://cunit.sourceforge.net/ 00:04:17.154 00:04:17.154 00:04:17.154 Suite: memory 00:04:17.154 Test: alloc and free memory map ...[2024-11-19 19:53:50.915590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:17.154 passed 00:04:17.412 Test: mem map translation ...[2024-11-19 19:53:50.954233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:17.412 [2024-11-19 19:53:50.954283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:17.412 [2024-11-19 19:53:50.954341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:17.412 [2024-11-19 19:53:50.954357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:17.412 passed 00:04:17.412 Test: mem map registration ...[2024-11-19 19:53:51.022362] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:17.412 [2024-11-19 19:53:51.022432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:17.412 passed 00:04:17.412 Test: mem map adjacent registrations ...passed 00:04:17.412 00:04:17.412 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.412 suites 1 1 n/a 0 0 00:04:17.412 tests 4 4 4 0 0 00:04:17.412 asserts 152 152 152 0 n/a 00:04:17.412 00:04:17.412 Elapsed time = 0.233 seconds 00:04:17.412 00:04:17.412 real 0m0.269s 00:04:17.412 user 0m0.241s 00:04:17.412 sys 0m0.021s 00:04:17.412 19:53:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.412 19:53:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:17.412 ************************************ 00:04:17.412 END TEST env_memory 00:04:17.412 ************************************ 00:04:17.412 19:53:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.412 19:53:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.412 19:53:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.412 19:53:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.412 ************************************ 00:04:17.412 START TEST env_vtophys 00:04:17.412 ************************************ 00:04:17.412 19:53:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.412 EAL: lib.eal log level changed from notice to debug 00:04:17.412 EAL: Detected lcore 0 as core 0 on socket 0 00:04:17.412 EAL: Detected lcore 1 as core 0 on socket 0 00:04:17.412 EAL: Detected lcore 2 as core 0 on socket 0 00:04:17.412 EAL: Detected lcore 3 as core 0 on socket 0 00:04:17.412 EAL: Detected lcore 4 as core 0 on socket 0 00:04:17.413 EAL: Detected lcore 5 as core 0 on socket 0 00:04:17.413 EAL: Detected lcore 6 as core 0 on socket 0 00:04:17.413 EAL: Detected lcore 7 as core 0 on socket 0 00:04:17.413 EAL: Detected lcore 8 as core 0 on socket 0 00:04:17.413 EAL: Detected lcore 9 as core 0 on socket 0 00:04:17.413 EAL: Maximum logical cores by configuration: 128 00:04:17.413 EAL: Detected CPU lcores: 10 00:04:17.413 EAL: Detected NUMA nodes: 1 00:04:17.413 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:17.413 EAL: Detected shared linkage of DPDK 00:04:17.671 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:17.671 EAL: Selected IOVA mode 'PA' 00:04:17.671 EAL: Probing VFIO support... 00:04:17.671 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:17.671 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:17.671 EAL: Ask a virtual area of 0x2e000 bytes 00:04:17.671 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:17.671 EAL: Setting up physically contiguous memory... 00:04:17.671 EAL: Setting maximum number of open files to 524288 00:04:17.671 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:17.671 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:17.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.671 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:17.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.671 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:17.671 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:17.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.671 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:17.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.671 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:17.671 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:17.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.671 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:17.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.671 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:17.671 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:17.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.671 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:17.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.671 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:17.671 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:17.671 EAL: Hugepages will be freed exactly as allocated. 00:04:17.671 EAL: No shared files mode enabled, IPC is disabled 00:04:17.671 EAL: No shared files mode enabled, IPC is disabled 00:04:17.671 EAL: TSC frequency is ~2600000 KHz 00:04:17.671 EAL: Main lcore 0 is ready (tid=7f17e9623a40;cpuset=[0]) 00:04:17.671 EAL: Trying to obtain current memory policy. 00:04:17.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.671 EAL: Restoring previous memory policy: 0 00:04:17.671 EAL: request: mp_malloc_sync 00:04:17.671 EAL: No shared files mode enabled, IPC is disabled 00:04:17.671 EAL: Heap on socket 0 was expanded by 2MB 00:04:17.671 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:17.671 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:17.671 EAL: Mem event callback 'spdk:(nil)' registered 00:04:17.671 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:17.671 00:04:17.671 00:04:17.671 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.671 http://cunit.sourceforge.net/ 00:04:17.671 00:04:17.671 00:04:17.671 Suite: components_suite 00:04:17.930 Test: vtophys_malloc_test ...passed 00:04:17.930 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:17.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.930 EAL: Restoring previous memory policy: 4 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was expanded by 4MB 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was shrunk by 4MB 00:04:17.930 EAL: Trying to obtain current memory policy. 00:04:17.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.930 EAL: Restoring previous memory policy: 4 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was expanded by 6MB 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was shrunk by 6MB 00:04:17.930 EAL: Trying to obtain current memory policy. 00:04:17.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.930 EAL: Restoring previous memory policy: 4 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was expanded by 10MB 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was shrunk by 10MB 00:04:17.930 EAL: Trying to obtain current memory policy. 00:04:17.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.930 EAL: Restoring previous memory policy: 4 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was expanded by 18MB 00:04:17.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.930 EAL: request: mp_malloc_sync 00:04:17.930 EAL: No shared files mode enabled, IPC is disabled 00:04:17.930 EAL: Heap on socket 0 was shrunk by 18MB 00:04:18.189 EAL: Trying to obtain current memory policy. 00:04:18.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.189 EAL: Restoring previous memory policy: 4 00:04:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.189 EAL: request: mp_malloc_sync 00:04:18.189 EAL: No shared files mode enabled, IPC is disabled 00:04:18.189 EAL: Heap on socket 0 was expanded by 34MB 00:04:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.189 EAL: request: mp_malloc_sync 00:04:18.189 EAL: No shared files mode enabled, IPC is disabled 00:04:18.189 EAL: Heap on socket 0 was shrunk by 34MB 00:04:18.189 EAL: Trying to obtain current memory policy. 
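The alternating "Heap on socket 0 was expanded by ..." / "... was shrunk by ..." pairs above (and continuing below) are the vtophys malloc test allocating progressively larger DMA-safe buffers from the hugepage-backed heap and freeing them again; each allocation and free fires the 'spdk:(nil)' mem event callback registered earlier. A minimal sketch of that allocate/translate/free cycle, written against the public SPDK env API rather than taken from the test's actual source, and assuming spdk_env_init() has already run:

/* Illustrative sketch only -- not the vtophys test source. */
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

#include "spdk/env.h"

static void
alloc_translate_free(size_t size)
{
	uint64_t len = size;

	/* DMA-capable allocation from the hugepage heap; large requests are
	 * what produce the "Heap on socket 0 was expanded by ..." lines. */
	void *buf = spdk_malloc(size, 0x200000 /* 2 MB alignment */, NULL,
				SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	assert(buf != NULL);

	/* Hugepage-backed DMA memory must have a valid physical mapping. */
	uint64_t paddr = spdk_vtophys(buf, &len);
	assert(paddr != SPDK_VTOPHYS_ERROR);
	printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64 " bytes)\n",
	       buf, paddr, len);

	/* Freeing lets DPDK hand hugepages back to the OS, which shows up
	 * as "Heap on socket 0 was shrunk by ...". */
	spdk_free(buf);
}

Calling a helper like this with 4 MB, 6 MB, 10 MB, 18 MB and so on would reproduce the expand/shrink sequence logged here; the helper name and the size progression are illustrative, not part of the test itself.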
00:04:18.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.189 EAL: Restoring previous memory policy: 4 00:04:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.189 EAL: request: mp_malloc_sync 00:04:18.189 EAL: No shared files mode enabled, IPC is disabled 00:04:18.189 EAL: Heap on socket 0 was expanded by 66MB 00:04:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.189 EAL: request: mp_malloc_sync 00:04:18.189 EAL: No shared files mode enabled, IPC is disabled 00:04:18.189 EAL: Heap on socket 0 was shrunk by 66MB 00:04:18.189 EAL: Trying to obtain current memory policy. 00:04:18.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.189 EAL: Restoring previous memory policy: 4 00:04:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.189 EAL: request: mp_malloc_sync 00:04:18.189 EAL: No shared files mode enabled, IPC is disabled 00:04:18.189 EAL: Heap on socket 0 was expanded by 130MB 00:04:18.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.447 EAL: request: mp_malloc_sync 00:04:18.447 EAL: No shared files mode enabled, IPC is disabled 00:04:18.447 EAL: Heap on socket 0 was shrunk by 130MB 00:04:18.706 EAL: Trying to obtain current memory policy. 00:04:18.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.706 EAL: Restoring previous memory policy: 4 00:04:18.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.706 EAL: request: mp_malloc_sync 00:04:18.706 EAL: No shared files mode enabled, IPC is disabled 00:04:18.706 EAL: Heap on socket 0 was expanded by 258MB 00:04:18.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.967 EAL: request: mp_malloc_sync 00:04:18.967 EAL: No shared files mode enabled, IPC is disabled 00:04:18.967 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.225 EAL: Trying to obtain current memory policy. 00:04:19.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.225 EAL: Restoring previous memory policy: 4 00:04:19.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.225 EAL: request: mp_malloc_sync 00:04:19.225 EAL: No shared files mode enabled, IPC is disabled 00:04:19.225 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.793 EAL: request: mp_malloc_sync 00:04:19.793 EAL: No shared files mode enabled, IPC is disabled 00:04:19.793 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.364 EAL: Trying to obtain current memory policy. 
00:04:20.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.624 EAL: Restoring previous memory policy: 4 00:04:20.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.624 EAL: request: mp_malloc_sync 00:04:20.624 EAL: No shared files mode enabled, IPC is disabled 00:04:20.624 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.586 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.844 EAL: request: mp_malloc_sync 00:04:21.844 EAL: No shared files mode enabled, IPC is disabled 00:04:21.844 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.778 passed 00:04:22.778 00:04:22.778 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.778 suites 1 1 n/a 0 0 00:04:22.778 tests 2 2 2 0 0 00:04:22.778 asserts 5915 5915 5915 0 n/a 00:04:22.778 00:04:22.778 Elapsed time = 4.851 seconds 00:04:22.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.778 EAL: request: mp_malloc_sync 00:04:22.778 EAL: No shared files mode enabled, IPC is disabled 00:04:22.778 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.778 EAL: No shared files mode enabled, IPC is disabled 00:04:22.778 EAL: No shared files mode enabled, IPC is disabled 00:04:22.778 EAL: No shared files mode enabled, IPC is disabled 00:04:22.778 00:04:22.778 real 0m5.119s 00:04:22.778 user 0m4.330s 00:04:22.778 sys 0m0.638s 00:04:22.778 19:53:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.778 19:53:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.778 ************************************ 00:04:22.778 END TEST env_vtophys 00:04:22.778 ************************************ 00:04:22.778 19:53:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:22.778 19:53:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.778 19:53:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.778 19:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.778 ************************************ 00:04:22.778 START TEST env_pci 00:04:22.778 ************************************ 00:04:22.778 19:53:56 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:22.778 00:04:22.778 00:04:22.778 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.778 http://cunit.sourceforge.net/ 00:04:22.778 00:04:22.778 00:04:22.778 Suite: pci 00:04:22.778 Test: pci_hook ...[2024-11-19 19:53:56.345405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56969 has claimed it 00:04:22.778 passed 00:04:22.778 00:04:22.778 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.778 suites 1 1 n/a 0 0 00:04:22.778 tests 1 1 1 0 0 00:04:22.778 asserts 25 25 25 0 n/a 00:04:22.778 00:04:22.778 Elapsed time = 0.004 secondsEAL: Cannot find device (10000:00:01.0) 00:04:22.778 EAL: Failed to attach device on primary process 00:04:22.778 00:04:22.778 00:04:22.778 real 0m0.058s 00:04:22.778 user 0m0.027s 00:04:22.778 sys 0m0.030s 00:04:22.778 19:53:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.778 19:53:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.778 ************************************ 00:04:22.778 END TEST env_pci 00:04:22.778 ************************************ 00:04:22.778 19:53:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.778 19:53:56 env -- env/env.sh@15 -- # uname 00:04:22.778 19:53:56 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.778 19:53:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.778 19:53:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.778 19:53:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:22.778 19:53:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.778 19:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.778 ************************************ 00:04:22.778 START TEST env_dpdk_post_init 00:04:22.778 ************************************ 00:04:22.778 19:53:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.778 EAL: Detected CPU lcores: 10 00:04:22.778 EAL: Detected NUMA nodes: 1 00:04:22.778 EAL: Detected shared linkage of DPDK 00:04:22.778 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.778 EAL: Selected IOVA mode 'PA' 00:04:23.037 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.037 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:23.037 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:23.037 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:23.037 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:23.037 Starting DPDK initialization... 00:04:23.037 Starting SPDK post initialization... 00:04:23.037 SPDK NVMe probe 00:04:23.037 Attaching to 0000:00:10.0 00:04:23.037 Attaching to 0000:00:11.0 00:04:23.037 Attaching to 0000:00:12.0 00:04:23.037 Attaching to 0000:00:13.0 00:04:23.037 Attached to 0000:00:10.0 00:04:23.037 Attached to 0000:00:11.0 00:04:23.037 Attached to 0000:00:13.0 00:04:23.037 Attached to 0000:00:12.0 00:04:23.037 Cleaning up... 
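The "Attaching to ... / Attached to ..." lines above come from an spdk_nvme_probe() sweep of the emulated PCIe bus: for each NVMe controller found, a probe callback decides whether to attach, and an attach callback runs once the controller is initialized. The following is a hedged sketch of that call pattern (probe_cb, attach_cb and the app name are placeholders; this is not the env_dpdk_post_init source):

/* Sketch of the probe/attach flow behind the log lines above. */
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* claim every controller the enumeration finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts env_opts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "post_init_sketch";	/* placeholder app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* A NULL transport ID means "enumerate the local PCIe bus". */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe() failed\n");
		return 1;
	}
	return 0;
}

Returning true from the probe callback for all four controllers yields one "Attached to" line per device; note that attach order need not match probe order, as seen above where 13.0 attaches before 12.0.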
00:04:23.037 00:04:23.037 real 0m0.228s 00:04:23.037 user 0m0.076s 00:04:23.037 sys 0m0.054s 00:04:23.037 19:53:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.037 19:53:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.037 ************************************ 00:04:23.037 END TEST env_dpdk_post_init 00:04:23.037 ************************************ 00:04:23.037 19:53:56 env -- env/env.sh@26 -- # uname 00:04:23.037 19:53:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:23.037 19:53:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.037 19:53:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.037 19:53:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.037 19:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.037 ************************************ 00:04:23.037 START TEST env_mem_callbacks 00:04:23.037 ************************************ 00:04:23.037 19:53:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.037 EAL: Detected CPU lcores: 10 00:04:23.037 EAL: Detected NUMA nodes: 1 00:04:23.037 EAL: Detected shared linkage of DPDK 00:04:23.037 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.037 EAL: Selected IOVA mode 'PA' 00:04:23.295 00:04:23.295 00:04:23.295 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.295 http://cunit.sourceforge.net/ 00:04:23.295 00:04:23.295 00:04:23.295 Suite: memory 00:04:23.295 Test: test ...TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.295 00:04:23.295 register 0x200000200000 2097152 00:04:23.295 malloc 3145728 00:04:23.295 register 0x200000400000 4194304 00:04:23.295 buf 0x2000004fffc0 len 3145728 PASSED 00:04:23.295 malloc 64 00:04:23.295 buf 0x2000004ffec0 len 64 PASSED 00:04:23.295 malloc 4194304 00:04:23.295 register 0x200000800000 6291456 00:04:23.295 buf 0x2000009fffc0 len 4194304 PASSED 00:04:23.295 free 0x2000004fffc0 3145728 00:04:23.295 free 0x2000004ffec0 64 00:04:23.295 unregister 0x200000400000 4194304 PASSED 00:04:23.295 free 0x2000009fffc0 4194304 00:04:23.295 unregister 0x200000800000 6291456 PASSED 00:04:23.295 malloc 8388608 00:04:23.295 register 0x200000400000 10485760 00:04:23.295 buf 0x2000005fffc0 len 8388608 PASSED 00:04:23.295 free 0x2000005fffc0 8388608 00:04:23.295 unregister 0x200000400000 10485760 PASSED 00:04:23.295 passed 00:04:23.295 00:04:23.295 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.295 suites 1 1 n/a 0 0 00:04:23.295 tests 1 1 1 0 0 00:04:23.295 asserts 15 15 15 0 n/a 00:04:23.295 00:04:23.295 Elapsed time = 0.048 seconds 00:04:23.295 00:04:23.295 real 0m0.216s 00:04:23.295 user 0m0.072s 00:04:23.295 sys 0m0.042s 00:04:23.295 19:53:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.295 19:53:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:23.295 ************************************ 00:04:23.295 END TEST env_mem_callbacks 00:04:23.295 ************************************ 00:04:23.295 00:04:23.295 real 0m6.231s 00:04:23.295 user 0m4.892s 00:04:23.295 sys 0m0.983s 00:04:23.295 19:53:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.295 19:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.295 ************************************ 00:04:23.295 END TEST env 00:04:23.295 
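The register/unregister lines in the mem_callbacks output above exercise SPDK's memory-map notification mechanism: any component that allocates a map with spdk_mem_map_alloc() has its notify callback invoked whenever address ranges are registered with or unregistered from the env layer, which is how drivers keep their vtophys/IOMMU translations in sync. A rough sketch of such a hook, assuming an already-initialized env and a callback that only logs (names are illustrative, not the test's own):

/* Rough sketch of a mem map notify hook; not the mem_callbacks test source. */
#include <stdio.h>

#include "spdk/env.h"

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;	/* a non-zero return would fail the registration */
}

static const struct spdk_mem_map_ops notify_ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

void
watch_registrations(void)
{
	/* Default translation 0 is returned for lookups on unregistered ranges. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &notify_ops, NULL);

	/*
	 * From here on, spdk_mem_register(vaddr, len) and
	 * spdk_mem_unregister(vaddr, len) on 2 MB-aligned regions invoke
	 * notify_cb, producing output like the register/unregister lines above.
	 */

	spdk_mem_map_free(&map);
}

The test drives this with its own registered regions and spdk_malloc() calls; the sketch only shows where the callback plugs in.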
************************************ 00:04:23.295 19:53:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.295 19:53:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.295 19:53:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.295 19:53:56 -- common/autotest_common.sh@10 -- # set +x 00:04:23.295 ************************************ 00:04:23.295 START TEST rpc 00:04:23.295 ************************************ 00:04:23.295 19:53:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.295 * Looking for test storage... 00:04:23.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.295 19:53:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.295 19:53:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.295 19:53:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.553 19:53:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.553 19:53:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.553 19:53:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.553 19:53:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.553 19:53:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.553 19:53:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.553 19:53:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.553 19:53:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.553 19:53:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.553 19:53:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.553 19:53:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.553 19:53:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.553 19:53:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.553 19:53:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.553 19:53:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.553 --rc genhtml_branch_coverage=1 00:04:23.553 --rc genhtml_function_coverage=1 00:04:23.553 --rc genhtml_legend=1 00:04:23.553 --rc geninfo_all_blocks=1 00:04:23.553 --rc geninfo_unexecuted_blocks=1 00:04:23.553 00:04:23.553 ' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.553 --rc genhtml_branch_coverage=1 00:04:23.553 --rc genhtml_function_coverage=1 00:04:23.553 --rc genhtml_legend=1 00:04:23.553 --rc geninfo_all_blocks=1 00:04:23.553 --rc geninfo_unexecuted_blocks=1 00:04:23.553 00:04:23.553 ' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.553 --rc genhtml_branch_coverage=1 00:04:23.553 --rc genhtml_function_coverage=1 00:04:23.553 --rc genhtml_legend=1 00:04:23.553 --rc geninfo_all_blocks=1 00:04:23.553 --rc geninfo_unexecuted_blocks=1 00:04:23.553 00:04:23.553 ' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.553 --rc genhtml_branch_coverage=1 00:04:23.553 --rc genhtml_function_coverage=1 00:04:23.553 --rc genhtml_legend=1 00:04:23.553 --rc geninfo_all_blocks=1 00:04:23.553 --rc geninfo_unexecuted_blocks=1 00:04:23.553 00:04:23.553 ' 00:04:23.553 19:53:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57091 00:04:23.553 19:53:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.553 19:53:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57091 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 57091 ']' 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.553 19:53:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.553 19:53:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.554 19:53:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:23.554 19:53:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.554 19:53:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.554 [2024-11-19 19:53:57.187946] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:23.554 [2024-11-19 19:53:57.188068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57091 ] 00:04:23.812 [2024-11-19 19:53:57.346752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.812 [2024-11-19 19:53:57.446296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:23.812 [2024-11-19 19:53:57.446355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57091' to capture a snapshot of events at runtime. 00:04:23.812 [2024-11-19 19:53:57.446365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:23.812 [2024-11-19 19:53:57.446374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:23.812 [2024-11-19 19:53:57.446382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57091 for offline analysis/debug. 00:04:23.812 [2024-11-19 19:53:57.447246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.378 19:53:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.378 19:53:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:24.378 19:53:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.378 19:53:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.378 19:53:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:24.378 19:53:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:24.378 19:53:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.378 19:53:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.378 19:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 ************************************ 00:04:24.378 START TEST rpc_integrity 00:04:24.378 ************************************ 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.378 19:53:58 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.378 { 00:04:24.378 "name": "Malloc0", 00:04:24.378 "aliases": [ 00:04:24.378 "7cd25935-c8c9-47ea-98d2-2f7eae05502d" 00:04:24.378 ], 00:04:24.378 "product_name": "Malloc disk", 00:04:24.378 "block_size": 512, 00:04:24.378 "num_blocks": 16384, 00:04:24.378 "uuid": "7cd25935-c8c9-47ea-98d2-2f7eae05502d", 00:04:24.378 "assigned_rate_limits": { 00:04:24.378 "rw_ios_per_sec": 0, 00:04:24.378 "rw_mbytes_per_sec": 0, 00:04:24.378 "r_mbytes_per_sec": 0, 00:04:24.378 "w_mbytes_per_sec": 0 00:04:24.378 }, 00:04:24.378 "claimed": false, 00:04:24.378 "zoned": false, 00:04:24.378 "supported_io_types": { 00:04:24.378 "read": true, 00:04:24.378 "write": true, 00:04:24.378 "unmap": true, 00:04:24.378 "flush": true, 00:04:24.378 "reset": true, 00:04:24.378 "nvme_admin": false, 00:04:24.378 "nvme_io": false, 00:04:24.378 "nvme_io_md": false, 00:04:24.378 "write_zeroes": true, 00:04:24.378 "zcopy": true, 00:04:24.378 "get_zone_info": false, 00:04:24.378 "zone_management": false, 00:04:24.378 "zone_append": false, 00:04:24.378 "compare": false, 00:04:24.378 "compare_and_write": false, 00:04:24.378 "abort": true, 00:04:24.378 "seek_hole": false, 00:04:24.378 "seek_data": false, 00:04:24.378 "copy": true, 00:04:24.378 "nvme_iov_md": false 00:04:24.378 }, 00:04:24.378 "memory_domains": [ 00:04:24.378 { 00:04:24.378 "dma_device_id": "system", 00:04:24.378 "dma_device_type": 1 00:04:24.378 }, 00:04:24.378 { 00:04:24.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.378 "dma_device_type": 2 00:04:24.378 } 00:04:24.378 ], 00:04:24.378 "driver_specific": {} 00:04:24.378 } 00:04:24.378 ]' 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.378 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 [2024-11-19 19:53:58.159802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.378 [2024-11-19 19:53:58.159865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.378 [2024-11-19 19:53:58.159895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:24.378 [2024-11-19 19:53:58.159908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.378 [2024-11-19 19:53:58.162126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.378 [2024-11-19 19:53:58.162173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.378 Passthru0 00:04:24.378 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.379 
19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.379 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.379 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.637 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.637 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.637 { 00:04:24.637 "name": "Malloc0", 00:04:24.637 "aliases": [ 00:04:24.637 "7cd25935-c8c9-47ea-98d2-2f7eae05502d" 00:04:24.637 ], 00:04:24.637 "product_name": "Malloc disk", 00:04:24.637 "block_size": 512, 00:04:24.637 "num_blocks": 16384, 00:04:24.637 "uuid": "7cd25935-c8c9-47ea-98d2-2f7eae05502d", 00:04:24.637 "assigned_rate_limits": { 00:04:24.637 "rw_ios_per_sec": 0, 00:04:24.637 "rw_mbytes_per_sec": 0, 00:04:24.637 "r_mbytes_per_sec": 0, 00:04:24.637 "w_mbytes_per_sec": 0 00:04:24.637 }, 00:04:24.637 "claimed": true, 00:04:24.637 "claim_type": "exclusive_write", 00:04:24.637 "zoned": false, 00:04:24.637 "supported_io_types": { 00:04:24.637 "read": true, 00:04:24.637 "write": true, 00:04:24.637 "unmap": true, 00:04:24.637 "flush": true, 00:04:24.637 "reset": true, 00:04:24.637 "nvme_admin": false, 00:04:24.637 "nvme_io": false, 00:04:24.637 "nvme_io_md": false, 00:04:24.637 "write_zeroes": true, 00:04:24.637 "zcopy": true, 00:04:24.637 "get_zone_info": false, 00:04:24.637 "zone_management": false, 00:04:24.637 "zone_append": false, 00:04:24.637 "compare": false, 00:04:24.637 "compare_and_write": false, 00:04:24.637 "abort": true, 00:04:24.637 "seek_hole": false, 00:04:24.637 "seek_data": false, 00:04:24.637 "copy": true, 00:04:24.637 "nvme_iov_md": false 00:04:24.637 }, 00:04:24.637 "memory_domains": [ 00:04:24.637 { 00:04:24.637 "dma_device_id": "system", 00:04:24.637 "dma_device_type": 1 00:04:24.637 }, 00:04:24.637 { 00:04:24.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.637 "dma_device_type": 2 00:04:24.637 } 00:04:24.637 ], 00:04:24.637 "driver_specific": {} 00:04:24.637 }, 00:04:24.637 { 00:04:24.637 "name": "Passthru0", 00:04:24.637 "aliases": [ 00:04:24.637 "42aaf06d-dec2-5031-ad82-ba6b17b76545" 00:04:24.637 ], 00:04:24.637 "product_name": "passthru", 00:04:24.637 "block_size": 512, 00:04:24.637 "num_blocks": 16384, 00:04:24.637 "uuid": "42aaf06d-dec2-5031-ad82-ba6b17b76545", 00:04:24.637 "assigned_rate_limits": { 00:04:24.637 "rw_ios_per_sec": 0, 00:04:24.637 "rw_mbytes_per_sec": 0, 00:04:24.637 "r_mbytes_per_sec": 0, 00:04:24.637 "w_mbytes_per_sec": 0 00:04:24.637 }, 00:04:24.637 "claimed": false, 00:04:24.637 "zoned": false, 00:04:24.637 "supported_io_types": { 00:04:24.637 "read": true, 00:04:24.637 "write": true, 00:04:24.637 "unmap": true, 00:04:24.637 "flush": true, 00:04:24.637 "reset": true, 00:04:24.637 "nvme_admin": false, 00:04:24.637 "nvme_io": false, 00:04:24.637 "nvme_io_md": false, 00:04:24.637 "write_zeroes": true, 00:04:24.637 "zcopy": true, 00:04:24.637 "get_zone_info": false, 00:04:24.637 "zone_management": false, 00:04:24.637 "zone_append": false, 00:04:24.637 "compare": false, 00:04:24.637 "compare_and_write": false, 00:04:24.637 "abort": true, 00:04:24.637 "seek_hole": false, 00:04:24.637 "seek_data": false, 00:04:24.637 "copy": true, 00:04:24.637 "nvme_iov_md": false 00:04:24.637 }, 00:04:24.637 "memory_domains": [ 00:04:24.637 { 00:04:24.637 "dma_device_id": "system", 00:04:24.637 "dma_device_type": 1 00:04:24.637 }, 00:04:24.638 { 00:04:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.638 "dma_device_type": 2 
00:04:24.638 } 00:04:24.638 ], 00:04:24.638 "driver_specific": { 00:04:24.638 "passthru": { 00:04:24.638 "name": "Passthru0", 00:04:24.638 "base_bdev_name": "Malloc0" 00:04:24.638 } 00:04:24.638 } 00:04:24.638 } 00:04:24.638 ]' 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.638 19:53:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.638 00:04:24.638 real 0m0.248s 00:04:24.638 user 0m0.131s 00:04:24.638 sys 0m0.036s 00:04:24.638 ************************************ 00:04:24.638 END TEST rpc_integrity 00:04:24.638 ************************************ 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.638 19:53:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.638 19:53:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.638 19:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 ************************************ 00:04:24.638 START TEST rpc_plugins 00:04:24.638 ************************************ 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.638 { 00:04:24.638 "name": "Malloc1", 00:04:24.638 "aliases": 
[ 00:04:24.638 "28d0281d-a04f-4d5d-b644-720d124a0447" 00:04:24.638 ], 00:04:24.638 "product_name": "Malloc disk", 00:04:24.638 "block_size": 4096, 00:04:24.638 "num_blocks": 256, 00:04:24.638 "uuid": "28d0281d-a04f-4d5d-b644-720d124a0447", 00:04:24.638 "assigned_rate_limits": { 00:04:24.638 "rw_ios_per_sec": 0, 00:04:24.638 "rw_mbytes_per_sec": 0, 00:04:24.638 "r_mbytes_per_sec": 0, 00:04:24.638 "w_mbytes_per_sec": 0 00:04:24.638 }, 00:04:24.638 "claimed": false, 00:04:24.638 "zoned": false, 00:04:24.638 "supported_io_types": { 00:04:24.638 "read": true, 00:04:24.638 "write": true, 00:04:24.638 "unmap": true, 00:04:24.638 "flush": true, 00:04:24.638 "reset": true, 00:04:24.638 "nvme_admin": false, 00:04:24.638 "nvme_io": false, 00:04:24.638 "nvme_io_md": false, 00:04:24.638 "write_zeroes": true, 00:04:24.638 "zcopy": true, 00:04:24.638 "get_zone_info": false, 00:04:24.638 "zone_management": false, 00:04:24.638 "zone_append": false, 00:04:24.638 "compare": false, 00:04:24.638 "compare_and_write": false, 00:04:24.638 "abort": true, 00:04:24.638 "seek_hole": false, 00:04:24.638 "seek_data": false, 00:04:24.638 "copy": true, 00:04:24.638 "nvme_iov_md": false 00:04:24.638 }, 00:04:24.638 "memory_domains": [ 00:04:24.638 { 00:04:24.638 "dma_device_id": "system", 00:04:24.638 "dma_device_type": 1 00:04:24.638 }, 00:04:24.638 { 00:04:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.638 "dma_device_type": 2 00:04:24.638 } 00:04:24.638 ], 00:04:24.638 "driver_specific": {} 00:04:24.638 } 00:04:24.638 ]' 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.638 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.897 19:53:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.897 00:04:24.897 real 0m0.111s 00:04:24.897 user 0m0.064s 00:04:24.897 sys 0m0.018s 00:04:24.897 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.897 ************************************ 00:04:24.897 END TEST rpc_plugins 00:04:24.897 19:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.897 ************************************ 00:04:24.897 19:53:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.897 19:53:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.897 19:53:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.897 19:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.897 ************************************ 00:04:24.897 START TEST rpc_trace_cmd_test 00:04:24.897 ************************************ 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.897 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57091", 00:04:24.897 "tpoint_group_mask": "0x8", 00:04:24.897 "iscsi_conn": { 00:04:24.897 "mask": "0x2", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "scsi": { 00:04:24.897 "mask": "0x4", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "bdev": { 00:04:24.897 "mask": "0x8", 00:04:24.897 "tpoint_mask": "0xffffffffffffffff" 00:04:24.897 }, 00:04:24.897 "nvmf_rdma": { 00:04:24.897 "mask": "0x10", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "nvmf_tcp": { 00:04:24.897 "mask": "0x20", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "ftl": { 00:04:24.897 "mask": "0x40", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "blobfs": { 00:04:24.897 "mask": "0x80", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "dsa": { 00:04:24.897 "mask": "0x200", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "thread": { 00:04:24.897 "mask": "0x400", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "nvme_pcie": { 00:04:24.897 "mask": "0x800", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "iaa": { 00:04:24.897 "mask": "0x1000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "nvme_tcp": { 00:04:24.897 "mask": "0x2000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "bdev_nvme": { 00:04:24.897 "mask": "0x4000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "sock": { 00:04:24.897 "mask": "0x8000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "blob": { 00:04:24.897 "mask": "0x10000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "bdev_raid": { 00:04:24.897 "mask": "0x20000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 }, 00:04:24.897 "scheduler": { 00:04:24.897 "mask": "0x40000", 00:04:24.897 "tpoint_mask": "0x0" 00:04:24.897 } 00:04:24.897 }' 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:24.897 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.898 00:04:24.898 real 0m0.171s 00:04:24.898 user 0m0.139s 00:04:24.898 sys 0m0.024s 00:04:24.898 ************************************ 00:04:24.898 END TEST 
rpc_trace_cmd_test 00:04:24.898 ************************************ 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.898 19:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.898 19:53:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.898 19:53:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.898 19:53:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.898 19:53:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.898 19:53:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.898 19:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.898 ************************************ 00:04:24.898 START TEST rpc_daemon_integrity 00:04:24.898 ************************************ 00:04:24.898 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:24.898 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.898 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.898 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.157 { 00:04:25.157 "name": "Malloc2", 00:04:25.157 "aliases": [ 00:04:25.157 "4b891599-91a1-42d4-ba3d-5ad2149bc72c" 00:04:25.157 ], 00:04:25.157 "product_name": "Malloc disk", 00:04:25.157 "block_size": 512, 00:04:25.157 "num_blocks": 16384, 00:04:25.157 "uuid": "4b891599-91a1-42d4-ba3d-5ad2149bc72c", 00:04:25.157 "assigned_rate_limits": { 00:04:25.157 "rw_ios_per_sec": 0, 00:04:25.157 "rw_mbytes_per_sec": 0, 00:04:25.157 "r_mbytes_per_sec": 0, 00:04:25.157 "w_mbytes_per_sec": 0 00:04:25.157 }, 00:04:25.157 "claimed": false, 00:04:25.157 "zoned": false, 00:04:25.157 "supported_io_types": { 00:04:25.157 "read": true, 00:04:25.157 "write": true, 00:04:25.157 "unmap": true, 00:04:25.157 "flush": true, 00:04:25.157 "reset": true, 00:04:25.157 "nvme_admin": false, 00:04:25.157 "nvme_io": false, 00:04:25.157 "nvme_io_md": false, 00:04:25.157 "write_zeroes": true, 00:04:25.157 "zcopy": true, 00:04:25.157 "get_zone_info": false, 00:04:25.157 "zone_management": false, 00:04:25.157 "zone_append": false, 00:04:25.157 "compare": false, 
00:04:25.157 "compare_and_write": false, 00:04:25.157 "abort": true, 00:04:25.157 "seek_hole": false, 00:04:25.157 "seek_data": false, 00:04:25.157 "copy": true, 00:04:25.157 "nvme_iov_md": false 00:04:25.157 }, 00:04:25.157 "memory_domains": [ 00:04:25.157 { 00:04:25.157 "dma_device_id": "system", 00:04:25.157 "dma_device_type": 1 00:04:25.157 }, 00:04:25.157 { 00:04:25.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.157 "dma_device_type": 2 00:04:25.157 } 00:04:25.157 ], 00:04:25.157 "driver_specific": {} 00:04:25.157 } 00:04:25.157 ]' 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.157 [2024-11-19 19:53:58.791207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.157 [2024-11-19 19:53:58.791288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.157 [2024-11-19 19:53:58.791309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:25.157 [2024-11-19 19:53:58.791320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.157 [2024-11-19 19:53:58.793507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.157 [2024-11-19 19:53:58.793546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.157 Passthru0 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.157 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.158 { 00:04:25.158 "name": "Malloc2", 00:04:25.158 "aliases": [ 00:04:25.158 "4b891599-91a1-42d4-ba3d-5ad2149bc72c" 00:04:25.158 ], 00:04:25.158 "product_name": "Malloc disk", 00:04:25.158 "block_size": 512, 00:04:25.158 "num_blocks": 16384, 00:04:25.158 "uuid": "4b891599-91a1-42d4-ba3d-5ad2149bc72c", 00:04:25.158 "assigned_rate_limits": { 00:04:25.158 "rw_ios_per_sec": 0, 00:04:25.158 "rw_mbytes_per_sec": 0, 00:04:25.158 "r_mbytes_per_sec": 0, 00:04:25.158 "w_mbytes_per_sec": 0 00:04:25.158 }, 00:04:25.158 "claimed": true, 00:04:25.158 "claim_type": "exclusive_write", 00:04:25.158 "zoned": false, 00:04:25.158 "supported_io_types": { 00:04:25.158 "read": true, 00:04:25.158 "write": true, 00:04:25.158 "unmap": true, 00:04:25.158 "flush": true, 00:04:25.158 "reset": true, 00:04:25.158 "nvme_admin": false, 00:04:25.158 "nvme_io": false, 00:04:25.158 "nvme_io_md": false, 00:04:25.158 "write_zeroes": true, 00:04:25.158 "zcopy": true, 00:04:25.158 "get_zone_info": false, 00:04:25.158 "zone_management": false, 00:04:25.158 "zone_append": false, 00:04:25.158 "compare": false, 00:04:25.158 "compare_and_write": false, 00:04:25.158 "abort": true, 00:04:25.158 "seek_hole": false, 00:04:25.158 
"seek_data": false, 00:04:25.158 "copy": true, 00:04:25.158 "nvme_iov_md": false 00:04:25.158 }, 00:04:25.158 "memory_domains": [ 00:04:25.158 { 00:04:25.158 "dma_device_id": "system", 00:04:25.158 "dma_device_type": 1 00:04:25.158 }, 00:04:25.158 { 00:04:25.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.158 "dma_device_type": 2 00:04:25.158 } 00:04:25.158 ], 00:04:25.158 "driver_specific": {} 00:04:25.158 }, 00:04:25.158 { 00:04:25.158 "name": "Passthru0", 00:04:25.158 "aliases": [ 00:04:25.158 "7e2c3649-f318-592d-a16b-95ac9d803ef8" 00:04:25.158 ], 00:04:25.158 "product_name": "passthru", 00:04:25.158 "block_size": 512, 00:04:25.158 "num_blocks": 16384, 00:04:25.158 "uuid": "7e2c3649-f318-592d-a16b-95ac9d803ef8", 00:04:25.158 "assigned_rate_limits": { 00:04:25.158 "rw_ios_per_sec": 0, 00:04:25.158 "rw_mbytes_per_sec": 0, 00:04:25.158 "r_mbytes_per_sec": 0, 00:04:25.158 "w_mbytes_per_sec": 0 00:04:25.158 }, 00:04:25.158 "claimed": false, 00:04:25.158 "zoned": false, 00:04:25.158 "supported_io_types": { 00:04:25.158 "read": true, 00:04:25.158 "write": true, 00:04:25.158 "unmap": true, 00:04:25.158 "flush": true, 00:04:25.158 "reset": true, 00:04:25.158 "nvme_admin": false, 00:04:25.158 "nvme_io": false, 00:04:25.158 "nvme_io_md": false, 00:04:25.158 "write_zeroes": true, 00:04:25.158 "zcopy": true, 00:04:25.158 "get_zone_info": false, 00:04:25.158 "zone_management": false, 00:04:25.158 "zone_append": false, 00:04:25.158 "compare": false, 00:04:25.158 "compare_and_write": false, 00:04:25.158 "abort": true, 00:04:25.158 "seek_hole": false, 00:04:25.158 "seek_data": false, 00:04:25.158 "copy": true, 00:04:25.158 "nvme_iov_md": false 00:04:25.158 }, 00:04:25.158 "memory_domains": [ 00:04:25.158 { 00:04:25.158 "dma_device_id": "system", 00:04:25.158 "dma_device_type": 1 00:04:25.158 }, 00:04:25.158 { 00:04:25.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.158 "dma_device_type": 2 00:04:25.158 } 00:04:25.158 ], 00:04:25.158 "driver_specific": { 00:04:25.158 "passthru": { 00:04:25.158 "name": "Passthru0", 00:04:25.158 "base_bdev_name": "Malloc2" 00:04:25.158 } 00:04:25.158 } 00:04:25.158 } 00:04:25.158 ]' 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # 
bdevs='[]' 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.158 00:04:25.158 real 0m0.240s 00:04:25.158 user 0m0.133s 00:04:25.158 sys 0m0.030s 00:04:25.158 ************************************ 00:04:25.158 END TEST rpc_daemon_integrity 00:04:25.158 ************************************ 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.158 19:53:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.417 19:53:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.417 19:53:58 rpc -- rpc/rpc.sh@84 -- # killprocess 57091 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 57091 ']' 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@958 -- # kill -0 57091 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57091 00:04:25.417 killing process with pid 57091 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57091' 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@973 -- # kill 57091 00:04:25.417 19:53:58 rpc -- common/autotest_common.sh@978 -- # wait 57091 00:04:26.790 00:04:26.790 real 0m3.350s 00:04:26.790 user 0m3.775s 00:04:26.790 sys 0m0.590s 00:04:26.790 19:54:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.790 19:54:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.790 ************************************ 00:04:26.790 END TEST rpc 00:04:26.790 ************************************ 00:04:26.790 19:54:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.790 19:54:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.790 19:54:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.790 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:04:26.790 ************************************ 00:04:26.790 START TEST skip_rpc 00:04:26.790 ************************************ 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.790 * Looking for test storage... 
00:04:26.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.790 19:54:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.790 --rc genhtml_branch_coverage=1 00:04:26.790 --rc genhtml_function_coverage=1 00:04:26.790 --rc genhtml_legend=1 00:04:26.790 --rc geninfo_all_blocks=1 00:04:26.790 --rc geninfo_unexecuted_blocks=1 00:04:26.790 00:04:26.790 ' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.790 --rc genhtml_branch_coverage=1 00:04:26.790 --rc genhtml_function_coverage=1 00:04:26.790 --rc genhtml_legend=1 00:04:26.790 --rc geninfo_all_blocks=1 00:04:26.790 --rc geninfo_unexecuted_blocks=1 00:04:26.790 00:04:26.790 ' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.790 --rc genhtml_branch_coverage=1 00:04:26.790 --rc genhtml_function_coverage=1 00:04:26.790 --rc genhtml_legend=1 00:04:26.790 --rc geninfo_all_blocks=1 00:04:26.790 --rc geninfo_unexecuted_blocks=1 00:04:26.790 00:04:26.790 ' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.790 --rc genhtml_branch_coverage=1 00:04:26.790 --rc genhtml_function_coverage=1 00:04:26.790 --rc genhtml_legend=1 00:04:26.790 --rc geninfo_all_blocks=1 00:04:26.790 --rc geninfo_unexecuted_blocks=1 00:04:26.790 00:04:26.790 ' 00:04:26.790 19:54:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.790 19:54:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.790 19:54:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.790 19:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.790 ************************************ 00:04:26.790 START TEST skip_rpc 00:04:26.790 ************************************ 00:04:26.790 19:54:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:26.790 19:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57303 00:04:26.790 19:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.790 19:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:26.790 19:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.048 [2024-11-19 19:54:00.586986] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:27.048 [2024-11-19 19:54:00.587084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57303 ] 00:04:27.048 [2024-11-19 19:54:00.734766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.048 [2024-11-19 19:54:00.816892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57303 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57303 ']' 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57303 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57303 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.385 killing process with pid 57303 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57303' 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57303 00:04:32.385 19:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57303 00:04:32.952 00:04:32.952 real 0m6.199s 00:04:32.952 user 0m5.840s 00:04:32.952 sys 0m0.252s 00:04:32.952 19:54:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.952 19:54:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.952 ************************************ 00:04:32.952 END TEST skip_rpc 00:04:32.952 
************************************ 00:04:33.210 19:54:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.210 19:54:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.210 19:54:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.210 19:54:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.210 ************************************ 00:04:33.210 START TEST skip_rpc_with_json 00:04:33.210 ************************************ 00:04:33.210 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:33.210 19:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.210 19:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57396 00:04:33.210 19:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57396 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57396 ']' 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.211 19:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.211 [2024-11-19 19:54:06.837941] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:33.211 [2024-11-19 19:54:06.838060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57396 ] 00:04:33.211 [2024-11-19 19:54:06.993213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.469 [2024-11-19 19:54:07.072338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.034 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.034 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.035 [2024-11-19 19:54:07.670973] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.035 request: 00:04:34.035 { 00:04:34.035 "trtype": "tcp", 00:04:34.035 "method": "nvmf_get_transports", 00:04:34.035 "req_id": 1 00:04:34.035 } 00:04:34.035 Got JSON-RPC error response 00:04:34.035 response: 00:04:34.035 { 00:04:34.035 "code": -19, 00:04:34.035 "message": "No such device" 00:04:34.035 } 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.035 [2024-11-19 19:54:07.679057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.035 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.293 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.294 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.294 { 00:04:34.294 "subsystems": [ 00:04:34.294 { 00:04:34.294 "subsystem": "fsdev", 00:04:34.294 "config": [ 00:04:34.294 { 00:04:34.294 "method": "fsdev_set_opts", 00:04:34.294 "params": { 00:04:34.294 "fsdev_io_pool_size": 65535, 00:04:34.294 "fsdev_io_cache_size": 256 00:04:34.294 } 00:04:34.294 } 00:04:34.294 ] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "keyring", 00:04:34.294 "config": [] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "iobuf", 00:04:34.294 "config": [ 00:04:34.294 { 00:04:34.294 "method": "iobuf_set_options", 00:04:34.294 "params": { 00:04:34.294 "small_pool_count": 8192, 00:04:34.294 "large_pool_count": 1024, 00:04:34.294 "small_bufsize": 8192, 00:04:34.294 "large_bufsize": 135168, 00:04:34.294 "enable_numa": false 00:04:34.294 } 00:04:34.294 } 00:04:34.294 ] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "sock", 00:04:34.294 "config": [ 00:04:34.294 { 
00:04:34.294 "method": "sock_set_default_impl", 00:04:34.294 "params": { 00:04:34.294 "impl_name": "posix" 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "sock_impl_set_options", 00:04:34.294 "params": { 00:04:34.294 "impl_name": "ssl", 00:04:34.294 "recv_buf_size": 4096, 00:04:34.294 "send_buf_size": 4096, 00:04:34.294 "enable_recv_pipe": true, 00:04:34.294 "enable_quickack": false, 00:04:34.294 "enable_placement_id": 0, 00:04:34.294 "enable_zerocopy_send_server": true, 00:04:34.294 "enable_zerocopy_send_client": false, 00:04:34.294 "zerocopy_threshold": 0, 00:04:34.294 "tls_version": 0, 00:04:34.294 "enable_ktls": false 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "sock_impl_set_options", 00:04:34.294 "params": { 00:04:34.294 "impl_name": "posix", 00:04:34.294 "recv_buf_size": 2097152, 00:04:34.294 "send_buf_size": 2097152, 00:04:34.294 "enable_recv_pipe": true, 00:04:34.294 "enable_quickack": false, 00:04:34.294 "enable_placement_id": 0, 00:04:34.294 "enable_zerocopy_send_server": true, 00:04:34.294 "enable_zerocopy_send_client": false, 00:04:34.294 "zerocopy_threshold": 0, 00:04:34.294 "tls_version": 0, 00:04:34.294 "enable_ktls": false 00:04:34.294 } 00:04:34.294 } 00:04:34.294 ] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "vmd", 00:04:34.294 "config": [] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "accel", 00:04:34.294 "config": [ 00:04:34.294 { 00:04:34.294 "method": "accel_set_options", 00:04:34.294 "params": { 00:04:34.294 "small_cache_size": 128, 00:04:34.294 "large_cache_size": 16, 00:04:34.294 "task_count": 2048, 00:04:34.294 "sequence_count": 2048, 00:04:34.294 "buf_count": 2048 00:04:34.294 } 00:04:34.294 } 00:04:34.294 ] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "bdev", 00:04:34.294 "config": [ 00:04:34.294 { 00:04:34.294 "method": "bdev_set_options", 00:04:34.294 "params": { 00:04:34.294 "bdev_io_pool_size": 65535, 00:04:34.294 "bdev_io_cache_size": 256, 00:04:34.294 "bdev_auto_examine": true, 00:04:34.294 "iobuf_small_cache_size": 128, 00:04:34.294 "iobuf_large_cache_size": 16 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "bdev_raid_set_options", 00:04:34.294 "params": { 00:04:34.294 "process_window_size_kb": 1024, 00:04:34.294 "process_max_bandwidth_mb_sec": 0 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "bdev_iscsi_set_options", 00:04:34.294 "params": { 00:04:34.294 "timeout_sec": 30 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "bdev_nvme_set_options", 00:04:34.294 "params": { 00:04:34.294 "action_on_timeout": "none", 00:04:34.294 "timeout_us": 0, 00:04:34.294 "timeout_admin_us": 0, 00:04:34.294 "keep_alive_timeout_ms": 10000, 00:04:34.294 "arbitration_burst": 0, 00:04:34.294 "low_priority_weight": 0, 00:04:34.294 "medium_priority_weight": 0, 00:04:34.294 "high_priority_weight": 0, 00:04:34.294 "nvme_adminq_poll_period_us": 10000, 00:04:34.294 "nvme_ioq_poll_period_us": 0, 00:04:34.294 "io_queue_requests": 0, 00:04:34.294 "delay_cmd_submit": true, 00:04:34.294 "transport_retry_count": 4, 00:04:34.294 "bdev_retry_count": 3, 00:04:34.294 "transport_ack_timeout": 0, 00:04:34.294 "ctrlr_loss_timeout_sec": 0, 00:04:34.294 "reconnect_delay_sec": 0, 00:04:34.294 "fast_io_fail_timeout_sec": 0, 00:04:34.294 "disable_auto_failback": false, 00:04:34.294 "generate_uuids": false, 00:04:34.294 "transport_tos": 0, 00:04:34.294 "nvme_error_stat": false, 00:04:34.294 "rdma_srq_size": 0, 00:04:34.294 "io_path_stat": false, 
00:04:34.294 "allow_accel_sequence": false, 00:04:34.294 "rdma_max_cq_size": 0, 00:04:34.294 "rdma_cm_event_timeout_ms": 0, 00:04:34.294 "dhchap_digests": [ 00:04:34.294 "sha256", 00:04:34.294 "sha384", 00:04:34.294 "sha512" 00:04:34.294 ], 00:04:34.294 "dhchap_dhgroups": [ 00:04:34.294 "null", 00:04:34.294 "ffdhe2048", 00:04:34.294 "ffdhe3072", 00:04:34.294 "ffdhe4096", 00:04:34.294 "ffdhe6144", 00:04:34.294 "ffdhe8192" 00:04:34.294 ] 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "bdev_nvme_set_hotplug", 00:04:34.294 "params": { 00:04:34.294 "period_us": 100000, 00:04:34.294 "enable": false 00:04:34.294 } 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "method": "bdev_wait_for_examine" 00:04:34.294 } 00:04:34.294 ] 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "scsi", 00:04:34.294 "config": null 00:04:34.294 }, 00:04:34.294 { 00:04:34.294 "subsystem": "scheduler", 00:04:34.294 "config": [ 00:04:34.294 { 00:04:34.294 "method": "framework_set_scheduler", 00:04:34.295 "params": { 00:04:34.295 "name": "static" 00:04:34.295 } 00:04:34.295 } 00:04:34.295 ] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "vhost_scsi", 00:04:34.295 "config": [] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "vhost_blk", 00:04:34.295 "config": [] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "ublk", 00:04:34.295 "config": [] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "nbd", 00:04:34.295 "config": [] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "nvmf", 00:04:34.295 "config": [ 00:04:34.295 { 00:04:34.295 "method": "nvmf_set_config", 00:04:34.295 "params": { 00:04:34.295 "discovery_filter": "match_any", 00:04:34.295 "admin_cmd_passthru": { 00:04:34.295 "identify_ctrlr": false 00:04:34.295 }, 00:04:34.295 "dhchap_digests": [ 00:04:34.295 "sha256", 00:04:34.295 "sha384", 00:04:34.295 "sha512" 00:04:34.295 ], 00:04:34.295 "dhchap_dhgroups": [ 00:04:34.295 "null", 00:04:34.295 "ffdhe2048", 00:04:34.295 "ffdhe3072", 00:04:34.295 "ffdhe4096", 00:04:34.295 "ffdhe6144", 00:04:34.295 "ffdhe8192" 00:04:34.295 ] 00:04:34.295 } 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "method": "nvmf_set_max_subsystems", 00:04:34.295 "params": { 00:04:34.295 "max_subsystems": 1024 00:04:34.295 } 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "method": "nvmf_set_crdt", 00:04:34.295 "params": { 00:04:34.295 "crdt1": 0, 00:04:34.295 "crdt2": 0, 00:04:34.295 "crdt3": 0 00:04:34.295 } 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "method": "nvmf_create_transport", 00:04:34.295 "params": { 00:04:34.295 "trtype": "TCP", 00:04:34.295 "max_queue_depth": 128, 00:04:34.295 "max_io_qpairs_per_ctrlr": 127, 00:04:34.295 "in_capsule_data_size": 4096, 00:04:34.295 "max_io_size": 131072, 00:04:34.295 "io_unit_size": 131072, 00:04:34.295 "max_aq_depth": 128, 00:04:34.295 "num_shared_buffers": 511, 00:04:34.295 "buf_cache_size": 4294967295, 00:04:34.295 "dif_insert_or_strip": false, 00:04:34.295 "zcopy": false, 00:04:34.295 "c2h_success": true, 00:04:34.295 "sock_priority": 0, 00:04:34.295 "abort_timeout_sec": 1, 00:04:34.295 "ack_timeout": 0, 00:04:34.295 "data_wr_pool_size": 0 00:04:34.295 } 00:04:34.295 } 00:04:34.295 ] 00:04:34.295 }, 00:04:34.295 { 00:04:34.295 "subsystem": "iscsi", 00:04:34.295 "config": [ 00:04:34.295 { 00:04:34.295 "method": "iscsi_set_options", 00:04:34.295 "params": { 00:04:34.295 "node_base": "iqn.2016-06.io.spdk", 00:04:34.295 "max_sessions": 128, 00:04:34.295 "max_connections_per_session": 2, 00:04:34.295 "max_queue_depth": 64, 00:04:34.295 
"default_time2wait": 2, 00:04:34.295 "default_time2retain": 20, 00:04:34.295 "first_burst_length": 8192, 00:04:34.295 "immediate_data": true, 00:04:34.295 "allow_duplicated_isid": false, 00:04:34.295 "error_recovery_level": 0, 00:04:34.295 "nop_timeout": 60, 00:04:34.295 "nop_in_interval": 30, 00:04:34.295 "disable_chap": false, 00:04:34.295 "require_chap": false, 00:04:34.295 "mutual_chap": false, 00:04:34.295 "chap_group": 0, 00:04:34.295 "max_large_datain_per_connection": 64, 00:04:34.295 "max_r2t_per_connection": 4, 00:04:34.295 "pdu_pool_size": 36864, 00:04:34.295 "immediate_data_pool_size": 16384, 00:04:34.295 "data_out_pool_size": 2048 00:04:34.295 } 00:04:34.295 } 00:04:34.295 ] 00:04:34.295 } 00:04:34.295 ] 00:04:34.295 } 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57396 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57396 ']' 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57396 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57396 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57396' 00:04:34.295 killing process with pid 57396 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57396 00:04:34.295 19:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57396 00:04:35.669 19:54:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57430 00:04:35.669 19:54:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:35.669 19:54:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57430 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57430 ']' 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57430 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57430 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.933 killing process with pid 57430 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57430' 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57430 00:04:40.933 19:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57430 00:04:41.503 19:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.503 19:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.503 00:04:41.503 real 0m8.489s 00:04:41.503 user 0m8.113s 00:04:41.503 sys 0m0.595s 00:04:41.503 19:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.503 19:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.503 ************************************ 00:04:41.503 END TEST skip_rpc_with_json 00:04:41.503 ************************************ 00:04:41.764 19:54:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.764 ************************************ 00:04:41.764 START TEST skip_rpc_with_delay 00:04:41.764 ************************************ 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.764 [2024-11-19 19:54:15.378285] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.764 00:04:41.764 real 0m0.113s 00:04:41.764 user 0m0.055s 00:04:41.764 sys 0m0.056s 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.764 19:54:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:41.764 ************************************ 00:04:41.764 END TEST skip_rpc_with_delay 00:04:41.764 ************************************ 00:04:41.764 19:54:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.764 19:54:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.764 19:54:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.764 19:54:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.764 ************************************ 00:04:41.764 START TEST exit_on_failed_rpc_init 00:04:41.764 ************************************ 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57553 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57553 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57553 ']' 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.764 19:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.764 [2024-11-19 19:54:15.543014] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:41.765 [2024-11-19 19:54:15.543135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57553 ] 00:04:42.023 [2024-11-19 19:54:15.697878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.023 [2024-11-19 19:54:15.777237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.592 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.851 [2024-11-19 19:54:16.448951] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:42.851 [2024-11-19 19:54:16.449300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57570 ] 00:04:42.851 [2024-11-19 19:54:16.608126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.110 [2024-11-19 19:54:16.704513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.110 [2024-11-19 19:54:16.704594] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:43.110 [2024-11-19 19:54:16.704607] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.110 [2024-11-19 19:54:16.704620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57553 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57553 ']' 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57553 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.110 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57553 00:04:43.368 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.368 killing process with pid 57553 00:04:43.368 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.368 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57553' 00:04:43.368 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57553 00:04:43.368 19:54:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57553 00:04:44.350 00:04:44.350 real 0m2.582s 00:04:44.350 user 0m2.921s 00:04:44.350 sys 0m0.360s 00:04:44.350 19:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.350 19:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 ************************************ 00:04:44.350 END TEST exit_on_failed_rpc_init 00:04:44.350 ************************************ 00:04:44.350 19:54:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.350 00:04:44.350 real 0m17.726s 00:04:44.350 user 0m17.067s 00:04:44.350 sys 0m1.437s 00:04:44.350 19:54:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.350 ************************************ 00:04:44.350 END TEST skip_rpc 00:04:44.350 ************************************ 00:04:44.350 19:54:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 19:54:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.350 19:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.350 19:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.350 19:54:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.610 
************************************ 00:04:44.610 START TEST rpc_client 00:04:44.610 ************************************ 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.610 * Looking for test storage... 00:04:44.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.610 19:54:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.610 --rc genhtml_branch_coverage=1 00:04:44.610 --rc genhtml_function_coverage=1 00:04:44.610 --rc genhtml_legend=1 00:04:44.610 --rc geninfo_all_blocks=1 00:04:44.610 --rc geninfo_unexecuted_blocks=1 00:04:44.610 00:04:44.610 ' 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.610 --rc genhtml_branch_coverage=1 00:04:44.610 --rc genhtml_function_coverage=1 00:04:44.610 --rc genhtml_legend=1 00:04:44.610 --rc geninfo_all_blocks=1 00:04:44.610 --rc geninfo_unexecuted_blocks=1 00:04:44.610 00:04:44.610 ' 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.610 --rc genhtml_branch_coverage=1 00:04:44.610 --rc genhtml_function_coverage=1 00:04:44.610 --rc genhtml_legend=1 00:04:44.610 --rc geninfo_all_blocks=1 00:04:44.610 --rc geninfo_unexecuted_blocks=1 00:04:44.610 00:04:44.610 ' 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.610 --rc genhtml_branch_coverage=1 00:04:44.610 --rc genhtml_function_coverage=1 00:04:44.610 --rc genhtml_legend=1 00:04:44.610 --rc geninfo_all_blocks=1 00:04:44.610 --rc geninfo_unexecuted_blocks=1 00:04:44.610 00:04:44.610 ' 00:04:44.610 19:54:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:44.610 OK 00:04:44.610 19:54:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.610 00:04:44.610 real 0m0.187s 00:04:44.610 user 0m0.109s 00:04:44.610 sys 0m0.085s 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.610 ************************************ 00:04:44.610 END TEST rpc_client 00:04:44.610 ************************************ 00:04:44.610 19:54:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.610 19:54:18 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.610 19:54:18 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.610 19:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.610 19:54:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.610 ************************************ 00:04:44.610 START TEST json_config 00:04:44.610 ************************************ 00:04:44.610 19:54:18 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.872 19:54:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.872 19:54:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.872 19:54:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.872 19:54:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.872 19:54:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.872 19:54:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.872 19:54:18 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.872 19:54:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.872 19:54:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.872 19:54:18 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.872 19:54:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.872 19:54:18 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.872 19:54:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.872 19:54:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.872 19:54:18 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.872 --rc genhtml_branch_coverage=1 00:04:44.872 --rc genhtml_function_coverage=1 00:04:44.872 --rc genhtml_legend=1 00:04:44.872 --rc geninfo_all_blocks=1 00:04:44.872 --rc geninfo_unexecuted_blocks=1 00:04:44.872 00:04:44.872 ' 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.872 --rc genhtml_branch_coverage=1 00:04:44.872 --rc genhtml_function_coverage=1 00:04:44.872 --rc genhtml_legend=1 00:04:44.872 --rc geninfo_all_blocks=1 00:04:44.872 --rc geninfo_unexecuted_blocks=1 00:04:44.872 00:04:44.872 ' 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.872 --rc genhtml_branch_coverage=1 00:04:44.872 --rc genhtml_function_coverage=1 00:04:44.872 --rc genhtml_legend=1 00:04:44.872 --rc geninfo_all_blocks=1 00:04:44.872 --rc geninfo_unexecuted_blocks=1 00:04:44.872 00:04:44.872 ' 00:04:44.872 19:54:18 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.872 --rc genhtml_branch_coverage=1 00:04:44.872 --rc genhtml_function_coverage=1 00:04:44.872 --rc genhtml_legend=1 00:04:44.872 --rc geninfo_all_blocks=1 00:04:44.872 --rc geninfo_unexecuted_blocks=1 00:04:44.872 00:04:44.872 ' 00:04:44.872 19:54:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.872 19:54:18 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11db6e10-0359-4a07-932f-9b365e9860cf 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=11db6e10-0359-4a07-932f-9b365e9860cf 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.872 19:54:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.872 19:54:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.872 19:54:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.872 19:54:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.872 19:54:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.872 19:54:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.872 19:54:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.873 19:54:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.873 19:54:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.873 19:54:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.873 19:54:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.873 19:54:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.873 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.873 19:54:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.873 00:04:44.873 real 0m0.138s 00:04:44.873 user 0m0.093s 00:04:44.873 sys 0m0.049s 00:04:44.873 19:54:18 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.873 19:54:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.873 ************************************ 00:04:44.873 END TEST json_config 00:04:44.873 ************************************ 00:04:44.873 19:54:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.873 19:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.873 19:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.873 19:54:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.873 ************************************ 00:04:44.873 START TEST json_config_extra_key 00:04:44.873 ************************************ 00:04:44.873 19:54:18 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.873 19:54:18 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.873 19:54:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.873 19:54:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.134 19:54:18 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.134 --rc genhtml_branch_coverage=1 00:04:45.134 --rc genhtml_function_coverage=1 00:04:45.134 --rc genhtml_legend=1 00:04:45.134 --rc geninfo_all_blocks=1 00:04:45.134 --rc geninfo_unexecuted_blocks=1 00:04:45.134 00:04:45.134 ' 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.134 --rc genhtml_branch_coverage=1 00:04:45.134 --rc genhtml_function_coverage=1 00:04:45.134 --rc genhtml_legend=1 00:04:45.134 --rc geninfo_all_blocks=1 00:04:45.134 --rc geninfo_unexecuted_blocks=1 00:04:45.134 00:04:45.134 ' 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.134 --rc genhtml_branch_coverage=1 00:04:45.134 --rc genhtml_function_coverage=1 00:04:45.134 --rc genhtml_legend=1 00:04:45.134 --rc geninfo_all_blocks=1 00:04:45.134 --rc geninfo_unexecuted_blocks=1 00:04:45.134 00:04:45.134 ' 00:04:45.134 19:54:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.134 --rc genhtml_branch_coverage=1 00:04:45.134 --rc 
genhtml_function_coverage=1 00:04:45.134 --rc genhtml_legend=1 00:04:45.134 --rc geninfo_all_blocks=1 00:04:45.134 --rc geninfo_unexecuted_blocks=1 00:04:45.134 00:04:45.134 ' 00:04:45.134 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11db6e10-0359-4a07-932f-9b365e9860cf 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=11db6e10-0359-4a07-932f-9b365e9860cf 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.134 19:54:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.134 19:54:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.134 19:54:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.134 19:54:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.135 19:54:18 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.135 19:54:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.135 19:54:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.135 19:54:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.135 INFO: launching applications... 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
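[editor note] The "[: : integer expression expected" messages traced above (nvmf/common.sh line 33) come from feeding an empty string to an arithmetic test; a minimal reproduction, not taken from the test suite itself:

    # '[' with -eq needs integer operands; an empty variable trips it.
    flag=""                      # empty, as in the trace above
    [ "$flag" -eq 1 ]            # -> "[: : integer expression expected" (exit 2)
    [ "${flag:-0}" -eq 1 ]       # guarded alternative: evaluates to false quietly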
00:04:45.135 19:54:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.135 Waiting for target to run... 00:04:45.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57759 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57759 /var/tmp/spdk_tgt.sock 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57759 ']' 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.135 19:54:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.135 19:54:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.135 [2024-11-19 19:54:18.775780] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:45.135 [2024-11-19 19:54:18.775906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57759 ] 00:04:45.395 [2024-11-19 19:54:19.095905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.395 [2024-11-19 19:54:19.184033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.965 19:54:19 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.965 19:54:19 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.965 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.965 INFO: shutting down applications... 00:04:45.965 19:54:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
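[editor note] waitforlisten, traced above with pid 57759 and /var/tmp/spdk_tgt.sock, amounts to polling the RPC socket until the freshly started spdk_tgt answers; a rough sketch under that assumption, not the verbatim common.sh helper:

    # Poll the target's RPC socket; give up after max_retries attempts.
    wait_for_spdk_rpc() {        # illustrative name, not from the test scripts
        local sock=${1:-/var/tmp/spdk_tgt.sock} retries=${2:-100}
        for ((i = 0; i < retries; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }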
00:04:45.965 19:54:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57759 ]] 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57759 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57759 00:04:45.965 19:54:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.534 19:54:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.534 19:54:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.534 19:54:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57759 00:04:46.534 19:54:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.104 19:54:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.104 19:54:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.104 19:54:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57759 00:04:47.104 19:54:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.674 19:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.674 19:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.674 19:54:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57759 00:04:47.674 19:54:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57759 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.934 SPDK target shutdown done 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.934 19:54:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.934 Success 00:04:47.934 19:54:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.934 00:04:47.934 real 0m3.131s 00:04:47.934 user 0m2.691s 00:04:47.934 sys 0m0.373s 00:04:47.934 19:54:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.934 ************************************ 00:04:47.934 END TEST json_config_extra_key 00:04:47.934 ************************************ 00:04:47.934 19:54:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.194 19:54:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.194 19:54:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.194 19:54:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.194 19:54:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.194 
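[editor note] The shutdown sequence traced above (kill -SIGINT, then repeated kill -0 probes with 0.5 s sleeps, up to 30 tries) is the usual graceful-stop pattern; an illustrative condensation assuming the same limits, not the verbatim json_config/common.sh:

    # Send SIGINT, then wait for the pid to disappear before declaring success.
    json_config_stop_target() {  # name is illustrative only
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1
    }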
************************************ 00:04:48.194 START TEST alias_rpc 00:04:48.194 ************************************ 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.194 * Looking for test storage... 00:04:48.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.194 19:54:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.194 --rc genhtml_branch_coverage=1 00:04:48.194 --rc genhtml_function_coverage=1 00:04:48.194 --rc genhtml_legend=1 00:04:48.194 --rc geninfo_all_blocks=1 00:04:48.194 --rc geninfo_unexecuted_blocks=1 00:04:48.194 00:04:48.194 ' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.194 --rc genhtml_branch_coverage=1 00:04:48.194 --rc genhtml_function_coverage=1 00:04:48.194 --rc genhtml_legend=1 00:04:48.194 --rc geninfo_all_blocks=1 00:04:48.194 --rc geninfo_unexecuted_blocks=1 00:04:48.194 00:04:48.194 ' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.194 --rc genhtml_branch_coverage=1 00:04:48.194 --rc genhtml_function_coverage=1 00:04:48.194 --rc genhtml_legend=1 00:04:48.194 --rc geninfo_all_blocks=1 00:04:48.194 --rc geninfo_unexecuted_blocks=1 00:04:48.194 00:04:48.194 ' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.194 --rc genhtml_branch_coverage=1 00:04:48.194 --rc genhtml_function_coverage=1 00:04:48.194 --rc genhtml_legend=1 00:04:48.194 --rc geninfo_all_blocks=1 00:04:48.194 --rc geninfo_unexecuted_blocks=1 00:04:48.194 00:04:48.194 ' 00:04:48.194 19:54:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.194 19:54:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57852 00:04:48.194 19:54:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57852 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57852 ']' 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.194 19:54:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.194 19:54:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.453 [2024-11-19 19:54:21.992124] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:48.453 [2024-11-19 19:54:21.992270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57852 ] 00:04:48.453 [2024-11-19 19:54:22.142424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.453 [2024-11-19 19:54:22.221201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.018 19:54:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.018 19:54:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.018 19:54:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.275 19:54:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57852 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57852 ']' 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57852 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57852 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.275 killing process with pid 57852 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57852' 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 57852 00:04:49.275 19:54:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 57852 00:04:50.651 00:04:50.651 real 0m2.436s 00:04:50.651 user 0m2.542s 00:04:50.651 sys 0m0.360s 00:04:50.651 19:54:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.651 19:54:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 ************************************ 00:04:50.651 END TEST alias_rpc 00:04:50.651 ************************************ 00:04:50.651 19:54:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:50.651 19:54:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.651 19:54:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.651 19:54:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.651 19:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 ************************************ 00:04:50.651 START TEST spdkcli_tcp 00:04:50.651 ************************************ 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.651 * Looking for test storage... 
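[editor note] killprocess, as traced for pid 57852 above, verifies the process is still the expected reactor (and not sudo) before signalling and reaping it; a hedged sketch of that flow, with the guards taken from the trace and the internals assumed:

    # Refuse to kill something that is no longer our reactor (or is sudo).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        local name
        name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        [ "$name" = sudo ] && return 1                        # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }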
00:04:50.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.651 19:54:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.651 --rc genhtml_branch_coverage=1 00:04:50.651 --rc genhtml_function_coverage=1 00:04:50.651 --rc genhtml_legend=1 00:04:50.651 --rc geninfo_all_blocks=1 00:04:50.651 --rc geninfo_unexecuted_blocks=1 00:04:50.651 00:04:50.651 ' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.651 --rc genhtml_branch_coverage=1 00:04:50.651 --rc genhtml_function_coverage=1 00:04:50.651 --rc genhtml_legend=1 00:04:50.651 --rc geninfo_all_blocks=1 00:04:50.651 --rc geninfo_unexecuted_blocks=1 00:04:50.651 
00:04:50.651 ' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.651 --rc genhtml_branch_coverage=1 00:04:50.651 --rc genhtml_function_coverage=1 00:04:50.651 --rc genhtml_legend=1 00:04:50.651 --rc geninfo_all_blocks=1 00:04:50.651 --rc geninfo_unexecuted_blocks=1 00:04:50.651 00:04:50.651 ' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.651 --rc genhtml_branch_coverage=1 00:04:50.651 --rc genhtml_function_coverage=1 00:04:50.651 --rc genhtml_legend=1 00:04:50.651 --rc geninfo_all_blocks=1 00:04:50.651 --rc geninfo_unexecuted_blocks=1 00:04:50.651 00:04:50.651 ' 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57942 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57942 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57942 ']' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.651 19:54:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 19:54:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.909 [2024-11-19 19:54:24.459453] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:50.909 [2024-11-19 19:54:24.459577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57942 ] 00:04:50.909 [2024-11-19 19:54:24.614244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.909 [2024-11-19 19:54:24.694618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.909 [2024-11-19 19:54:24.694698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.843 19:54:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.843 19:54:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:51.843 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57959 00:04:51.843 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:51.843 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.843 [ 00:04:51.843 "bdev_malloc_delete", 00:04:51.843 "bdev_malloc_create", 00:04:51.843 "bdev_null_resize", 00:04:51.843 "bdev_null_delete", 00:04:51.843 "bdev_null_create", 00:04:51.843 "bdev_nvme_cuse_unregister", 00:04:51.843 "bdev_nvme_cuse_register", 00:04:51.843 "bdev_opal_new_user", 00:04:51.843 "bdev_opal_set_lock_state", 00:04:51.843 "bdev_opal_delete", 00:04:51.843 "bdev_opal_get_info", 00:04:51.843 "bdev_opal_create", 00:04:51.843 "bdev_nvme_opal_revert", 00:04:51.843 "bdev_nvme_opal_init", 00:04:51.843 "bdev_nvme_send_cmd", 00:04:51.843 "bdev_nvme_set_keys", 00:04:51.843 "bdev_nvme_get_path_iostat", 00:04:51.843 "bdev_nvme_get_mdns_discovery_info", 00:04:51.844 "bdev_nvme_stop_mdns_discovery", 00:04:51.844 "bdev_nvme_start_mdns_discovery", 00:04:51.844 "bdev_nvme_set_multipath_policy", 00:04:51.844 "bdev_nvme_set_preferred_path", 00:04:51.844 "bdev_nvme_get_io_paths", 00:04:51.844 "bdev_nvme_remove_error_injection", 00:04:51.844 "bdev_nvme_add_error_injection", 00:04:51.844 "bdev_nvme_get_discovery_info", 00:04:51.844 "bdev_nvme_stop_discovery", 00:04:51.844 "bdev_nvme_start_discovery", 00:04:51.844 "bdev_nvme_get_controller_health_info", 00:04:51.844 "bdev_nvme_disable_controller", 00:04:51.844 "bdev_nvme_enable_controller", 00:04:51.844 "bdev_nvme_reset_controller", 00:04:51.844 "bdev_nvme_get_transport_statistics", 00:04:51.844 "bdev_nvme_apply_firmware", 00:04:51.844 "bdev_nvme_detach_controller", 00:04:51.844 "bdev_nvme_get_controllers", 00:04:51.844 "bdev_nvme_attach_controller", 00:04:51.844 "bdev_nvme_set_hotplug", 00:04:51.844 "bdev_nvme_set_options", 00:04:51.844 "bdev_passthru_delete", 00:04:51.844 "bdev_passthru_create", 00:04:51.844 "bdev_lvol_set_parent_bdev", 00:04:51.844 "bdev_lvol_set_parent", 00:04:51.844 "bdev_lvol_check_shallow_copy", 00:04:51.844 "bdev_lvol_start_shallow_copy", 00:04:51.844 "bdev_lvol_grow_lvstore", 00:04:51.844 "bdev_lvol_get_lvols", 00:04:51.844 "bdev_lvol_get_lvstores", 00:04:51.844 "bdev_lvol_delete", 00:04:51.844 "bdev_lvol_set_read_only", 00:04:51.844 "bdev_lvol_resize", 00:04:51.844 "bdev_lvol_decouple_parent", 00:04:51.844 "bdev_lvol_inflate", 00:04:51.844 "bdev_lvol_rename", 00:04:51.844 "bdev_lvol_clone_bdev", 00:04:51.844 "bdev_lvol_clone", 00:04:51.844 "bdev_lvol_snapshot", 00:04:51.844 "bdev_lvol_create", 00:04:51.844 "bdev_lvol_delete_lvstore", 00:04:51.844 "bdev_lvol_rename_lvstore", 00:04:51.844 
"bdev_lvol_create_lvstore", 00:04:51.844 "bdev_raid_set_options", 00:04:51.844 "bdev_raid_remove_base_bdev", 00:04:51.844 "bdev_raid_add_base_bdev", 00:04:51.844 "bdev_raid_delete", 00:04:51.844 "bdev_raid_create", 00:04:51.844 "bdev_raid_get_bdevs", 00:04:51.844 "bdev_error_inject_error", 00:04:51.844 "bdev_error_delete", 00:04:51.844 "bdev_error_create", 00:04:51.844 "bdev_split_delete", 00:04:51.844 "bdev_split_create", 00:04:51.844 "bdev_delay_delete", 00:04:51.844 "bdev_delay_create", 00:04:51.844 "bdev_delay_update_latency", 00:04:51.844 "bdev_zone_block_delete", 00:04:51.844 "bdev_zone_block_create", 00:04:51.844 "blobfs_create", 00:04:51.844 "blobfs_detect", 00:04:51.844 "blobfs_set_cache_size", 00:04:51.844 "bdev_xnvme_delete", 00:04:51.844 "bdev_xnvme_create", 00:04:51.844 "bdev_aio_delete", 00:04:51.844 "bdev_aio_rescan", 00:04:51.844 "bdev_aio_create", 00:04:51.844 "bdev_ftl_set_property", 00:04:51.844 "bdev_ftl_get_properties", 00:04:51.844 "bdev_ftl_get_stats", 00:04:51.844 "bdev_ftl_unmap", 00:04:51.844 "bdev_ftl_unload", 00:04:51.844 "bdev_ftl_delete", 00:04:51.844 "bdev_ftl_load", 00:04:51.844 "bdev_ftl_create", 00:04:51.844 "bdev_virtio_attach_controller", 00:04:51.844 "bdev_virtio_scsi_get_devices", 00:04:51.844 "bdev_virtio_detach_controller", 00:04:51.844 "bdev_virtio_blk_set_hotplug", 00:04:51.844 "bdev_iscsi_delete", 00:04:51.844 "bdev_iscsi_create", 00:04:51.844 "bdev_iscsi_set_options", 00:04:51.844 "accel_error_inject_error", 00:04:51.844 "ioat_scan_accel_module", 00:04:51.844 "dsa_scan_accel_module", 00:04:51.844 "iaa_scan_accel_module", 00:04:51.844 "keyring_file_remove_key", 00:04:51.844 "keyring_file_add_key", 00:04:51.844 "keyring_linux_set_options", 00:04:51.844 "fsdev_aio_delete", 00:04:51.844 "fsdev_aio_create", 00:04:51.844 "iscsi_get_histogram", 00:04:51.844 "iscsi_enable_histogram", 00:04:51.844 "iscsi_set_options", 00:04:51.844 "iscsi_get_auth_groups", 00:04:51.844 "iscsi_auth_group_remove_secret", 00:04:51.844 "iscsi_auth_group_add_secret", 00:04:51.844 "iscsi_delete_auth_group", 00:04:51.844 "iscsi_create_auth_group", 00:04:51.844 "iscsi_set_discovery_auth", 00:04:51.844 "iscsi_get_options", 00:04:51.844 "iscsi_target_node_request_logout", 00:04:51.844 "iscsi_target_node_set_redirect", 00:04:51.844 "iscsi_target_node_set_auth", 00:04:51.844 "iscsi_target_node_add_lun", 00:04:51.844 "iscsi_get_stats", 00:04:51.844 "iscsi_get_connections", 00:04:51.844 "iscsi_portal_group_set_auth", 00:04:51.844 "iscsi_start_portal_group", 00:04:51.844 "iscsi_delete_portal_group", 00:04:51.844 "iscsi_create_portal_group", 00:04:51.844 "iscsi_get_portal_groups", 00:04:51.844 "iscsi_delete_target_node", 00:04:51.844 "iscsi_target_node_remove_pg_ig_maps", 00:04:51.844 "iscsi_target_node_add_pg_ig_maps", 00:04:51.844 "iscsi_create_target_node", 00:04:51.844 "iscsi_get_target_nodes", 00:04:51.844 "iscsi_delete_initiator_group", 00:04:51.844 "iscsi_initiator_group_remove_initiators", 00:04:51.844 "iscsi_initiator_group_add_initiators", 00:04:51.844 "iscsi_create_initiator_group", 00:04:51.844 "iscsi_get_initiator_groups", 00:04:51.844 "nvmf_set_crdt", 00:04:51.844 "nvmf_set_config", 00:04:51.844 "nvmf_set_max_subsystems", 00:04:51.844 "nvmf_stop_mdns_prr", 00:04:51.844 "nvmf_publish_mdns_prr", 00:04:51.844 "nvmf_subsystem_get_listeners", 00:04:51.844 "nvmf_subsystem_get_qpairs", 00:04:51.844 "nvmf_subsystem_get_controllers", 00:04:51.844 "nvmf_get_stats", 00:04:51.844 "nvmf_get_transports", 00:04:51.844 "nvmf_create_transport", 00:04:51.844 "nvmf_get_targets", 00:04:51.844 
"nvmf_delete_target", 00:04:51.844 "nvmf_create_target", 00:04:51.844 "nvmf_subsystem_allow_any_host", 00:04:51.844 "nvmf_subsystem_set_keys", 00:04:51.844 "nvmf_subsystem_remove_host", 00:04:51.844 "nvmf_subsystem_add_host", 00:04:51.844 "nvmf_ns_remove_host", 00:04:51.844 "nvmf_ns_add_host", 00:04:51.844 "nvmf_subsystem_remove_ns", 00:04:51.844 "nvmf_subsystem_set_ns_ana_group", 00:04:51.844 "nvmf_subsystem_add_ns", 00:04:51.844 "nvmf_subsystem_listener_set_ana_state", 00:04:51.844 "nvmf_discovery_get_referrals", 00:04:51.844 "nvmf_discovery_remove_referral", 00:04:51.844 "nvmf_discovery_add_referral", 00:04:51.844 "nvmf_subsystem_remove_listener", 00:04:51.844 "nvmf_subsystem_add_listener", 00:04:51.844 "nvmf_delete_subsystem", 00:04:51.844 "nvmf_create_subsystem", 00:04:51.844 "nvmf_get_subsystems", 00:04:51.844 "env_dpdk_get_mem_stats", 00:04:51.844 "nbd_get_disks", 00:04:51.844 "nbd_stop_disk", 00:04:51.844 "nbd_start_disk", 00:04:51.844 "ublk_recover_disk", 00:04:51.844 "ublk_get_disks", 00:04:51.844 "ublk_stop_disk", 00:04:51.844 "ublk_start_disk", 00:04:51.844 "ublk_destroy_target", 00:04:51.844 "ublk_create_target", 00:04:51.844 "virtio_blk_create_transport", 00:04:51.844 "virtio_blk_get_transports", 00:04:51.844 "vhost_controller_set_coalescing", 00:04:51.844 "vhost_get_controllers", 00:04:51.844 "vhost_delete_controller", 00:04:51.844 "vhost_create_blk_controller", 00:04:51.844 "vhost_scsi_controller_remove_target", 00:04:51.844 "vhost_scsi_controller_add_target", 00:04:51.844 "vhost_start_scsi_controller", 00:04:51.844 "vhost_create_scsi_controller", 00:04:51.844 "thread_set_cpumask", 00:04:51.844 "scheduler_set_options", 00:04:51.844 "framework_get_governor", 00:04:51.844 "framework_get_scheduler", 00:04:51.844 "framework_set_scheduler", 00:04:51.844 "framework_get_reactors", 00:04:51.844 "thread_get_io_channels", 00:04:51.844 "thread_get_pollers", 00:04:51.844 "thread_get_stats", 00:04:51.844 "framework_monitor_context_switch", 00:04:51.844 "spdk_kill_instance", 00:04:51.844 "log_enable_timestamps", 00:04:51.844 "log_get_flags", 00:04:51.844 "log_clear_flag", 00:04:51.844 "log_set_flag", 00:04:51.844 "log_get_level", 00:04:51.844 "log_set_level", 00:04:51.844 "log_get_print_level", 00:04:51.844 "log_set_print_level", 00:04:51.844 "framework_enable_cpumask_locks", 00:04:51.844 "framework_disable_cpumask_locks", 00:04:51.844 "framework_wait_init", 00:04:51.844 "framework_start_init", 00:04:51.844 "scsi_get_devices", 00:04:51.844 "bdev_get_histogram", 00:04:51.844 "bdev_enable_histogram", 00:04:51.844 "bdev_set_qos_limit", 00:04:51.844 "bdev_set_qd_sampling_period", 00:04:51.844 "bdev_get_bdevs", 00:04:51.844 "bdev_reset_iostat", 00:04:51.844 "bdev_get_iostat", 00:04:51.844 "bdev_examine", 00:04:51.844 "bdev_wait_for_examine", 00:04:51.844 "bdev_set_options", 00:04:51.844 "accel_get_stats", 00:04:51.844 "accel_set_options", 00:04:51.844 "accel_set_driver", 00:04:51.844 "accel_crypto_key_destroy", 00:04:51.844 "accel_crypto_keys_get", 00:04:51.844 "accel_crypto_key_create", 00:04:51.844 "accel_assign_opc", 00:04:51.844 "accel_get_module_info", 00:04:51.844 "accel_get_opc_assignments", 00:04:51.844 "vmd_rescan", 00:04:51.844 "vmd_remove_device", 00:04:51.844 "vmd_enable", 00:04:51.844 "sock_get_default_impl", 00:04:51.844 "sock_set_default_impl", 00:04:51.844 "sock_impl_set_options", 00:04:51.845 "sock_impl_get_options", 00:04:51.845 "iobuf_get_stats", 00:04:51.845 "iobuf_set_options", 00:04:51.845 "keyring_get_keys", 00:04:51.845 "framework_get_pci_devices", 00:04:51.845 
"framework_get_config", 00:04:51.845 "framework_get_subsystems", 00:04:51.845 "fsdev_set_opts", 00:04:51.845 "fsdev_get_opts", 00:04:51.845 "trace_get_info", 00:04:51.845 "trace_get_tpoint_group_mask", 00:04:51.845 "trace_disable_tpoint_group", 00:04:51.845 "trace_enable_tpoint_group", 00:04:51.845 "trace_clear_tpoint_mask", 00:04:51.845 "trace_set_tpoint_mask", 00:04:51.845 "notify_get_notifications", 00:04:51.845 "notify_get_types", 00:04:51.845 "spdk_get_version", 00:04:51.845 "rpc_get_methods" 00:04:51.845 ] 00:04:51.845 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.845 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:51.845 19:54:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57942 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57942 ']' 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57942 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57942 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.845 killing process with pid 57942 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57942' 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57942 00:04:51.845 19:54:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57942 00:04:53.218 00:04:53.218 real 0m2.447s 00:04:53.218 user 0m4.429s 00:04:53.218 sys 0m0.390s 00:04:53.218 19:54:26 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.218 19:54:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.218 ************************************ 00:04:53.218 END TEST spdkcli_tcp 00:04:53.218 ************************************ 00:04:53.218 19:54:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.218 19:54:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.218 19:54:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.218 19:54:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.218 ************************************ 00:04:53.218 START TEST dpdk_mem_utility 00:04:53.218 ************************************ 00:04:53.218 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.218 * Looking for test storage... 
00:04:53.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:53.218 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.218 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.218 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.218 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.218 19:54:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:53.219 19:54:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.219 19:54:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.219 19:54:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.219 19:54:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.219 --rc genhtml_branch_coverage=1 00:04:53.219 --rc genhtml_function_coverage=1 00:04:53.219 --rc genhtml_legend=1 00:04:53.219 --rc geninfo_all_blocks=1 00:04:53.219 --rc geninfo_unexecuted_blocks=1 00:04:53.219 00:04:53.219 ' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.219 --rc 
genhtml_branch_coverage=1 00:04:53.219 --rc genhtml_function_coverage=1 00:04:53.219 --rc genhtml_legend=1 00:04:53.219 --rc geninfo_all_blocks=1 00:04:53.219 --rc geninfo_unexecuted_blocks=1 00:04:53.219 00:04:53.219 ' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.219 --rc genhtml_branch_coverage=1 00:04:53.219 --rc genhtml_function_coverage=1 00:04:53.219 --rc genhtml_legend=1 00:04:53.219 --rc geninfo_all_blocks=1 00:04:53.219 --rc geninfo_unexecuted_blocks=1 00:04:53.219 00:04:53.219 ' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.219 --rc genhtml_branch_coverage=1 00:04:53.219 --rc genhtml_function_coverage=1 00:04:53.219 --rc genhtml_legend=1 00:04:53.219 --rc geninfo_all_blocks=1 00:04:53.219 --rc geninfo_unexecuted_blocks=1 00:04:53.219 00:04:53.219 ' 00:04:53.219 19:54:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:53.219 19:54:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58048 00:04:53.219 19:54:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58048 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58048 ']' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.219 19:54:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.219 19:54:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.219 [2024-11-19 19:54:26.956533] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:53.219 [2024-11-19 19:54:26.956662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58048 ] 00:04:53.477 [2024-11-19 19:54:27.105983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.477 [2024-11-19 19:54:27.185381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.043 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.043 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:54.043 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.043 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.043 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.043 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.043 { 00:04:54.043 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.043 } 00:04:54.043 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.043 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.043 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:54.043 1 heaps totaling size 816.000000 MiB 00:04:54.043 size: 816.000000 MiB heap id: 0 00:04:54.043 end heaps---------- 00:04:54.043 9 mempools totaling size 595.772034 MiB 00:04:54.043 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.043 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.043 size: 92.545471 MiB name: bdev_io_58048 00:04:54.043 size: 50.003479 MiB name: msgpool_58048 00:04:54.043 size: 36.509338 MiB name: fsdev_io_58048 00:04:54.043 size: 21.763794 MiB name: PDU_Pool 00:04:54.043 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.043 size: 4.133484 MiB name: evtpool_58048 00:04:54.043 size: 0.026123 MiB name: Session_Pool 00:04:54.043 end mempools------- 00:04:54.043 6 memzones totaling size 4.142822 MiB 00:04:54.043 size: 1.000366 MiB name: RG_ring_0_58048 00:04:54.043 size: 1.000366 MiB name: RG_ring_1_58048 00:04:54.043 size: 1.000366 MiB name: RG_ring_4_58048 00:04:54.043 size: 1.000366 MiB name: RG_ring_5_58048 00:04:54.043 size: 0.125366 MiB name: RG_ring_2_58048 00:04:54.043 size: 0.015991 MiB name: RG_ring_3_58048 00:04:54.043 end memzones------- 00:04:54.043 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.303 heap id: 0 total size: 816.000000 MiB number of busy elements: 317 number of free elements: 18 00:04:54.303 list of free elements. 
size: 16.790894 MiB 00:04:54.303 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:54.303 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:54.303 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:54.303 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:54.303 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:54.303 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:54.303 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:54.303 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:54.303 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:54.303 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:54.303 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:54.303 element at address: 0x20001ac00000 with size: 0.558533 MiB 00:04:54.303 element at address: 0x200000c00000 with size: 0.491638 MiB 00:04:54.303 element at address: 0x200018e00000 with size: 0.488220 MiB 00:04:54.303 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:54.303 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:54.303 element at address: 0x200028000000 with size: 0.391418 MiB 00:04:54.303 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:54.303 list of standard malloc elements. size: 199.288208 MiB 00:04:54.303 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:54.303 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:54.303 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:54.303 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:54.303 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:54.303 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:54.303 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:54.303 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:54.303 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:54.303 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:54.303 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:54.303 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:54.303 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:54.304 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:54.304 element at 
address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d6c0 
with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8efc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:54.304 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac917c0 with size: 0.000244 MiB 
00:04:54.305 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:54.305 element at 
address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:54.305 element at address: 0x200028064340 with size: 0.000244 MiB 00:04:54.305 element at address: 0x200028064440 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b100 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d680 
with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:54.305 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:54.305 list of memzone associated elements. 
size: 599.920898 MiB 00:04:54.305 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:54.305 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.306 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:54.306 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.306 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:54.306 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58048_0 00:04:54.306 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:54.306 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58048_0 00:04:54.306 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:54.306 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58048_0 00:04:54.306 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:54.306 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.306 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:54.306 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.306 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:54.306 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58048_0 00:04:54.306 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:54.306 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58048 00:04:54.306 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:54.306 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58048 00:04:54.306 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:54.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.306 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:54.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.306 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:54.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.306 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:54.306 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.306 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:54.306 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58048 00:04:54.306 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:54.306 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58048 00:04:54.306 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:54.306 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58048 00:04:54.306 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:54.306 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58048 00:04:54.306 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:54.306 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58048 00:04:54.306 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:54.306 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58048 00:04:54.306 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:54.306 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:54.306 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:54.306 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.306 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:54.306 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.306 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:54.306 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58048 00:04:54.306 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:54.306 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58048 00:04:54.306 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:54.306 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.306 element at address: 0x200028064540 with size: 0.023804 MiB 00:04:54.306 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.306 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:54.306 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58048 00:04:54.306 element at address: 0x20002806a6c0 with size: 0.002502 MiB 00:04:54.306 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.306 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:54.306 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58048 00:04:54.306 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:54.306 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58048 00:04:54.306 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:54.306 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58048 00:04:54.306 element at address: 0x20002806b200 with size: 0.000366 MiB 00:04:54.306 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.306 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.306 19:54:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58048 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58048 ']' 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58048 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58048 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.306 killing process with pid 58048 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58048' 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58048 00:04:54.306 19:54:27 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58048 00:04:55.242 00:04:55.242 real 0m2.282s 00:04:55.242 user 0m2.279s 00:04:55.242 sys 0m0.371s 00:04:55.242 19:54:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.242 ************************************ 00:04:55.242 END TEST dpdk_mem_utility 00:04:55.242 ************************************ 00:04:55.242 19:54:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.504 19:54:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.504 19:54:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.504 19:54:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.504 19:54:29 -- common/autotest_common.sh@10 -- # set +x 
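Note: nearly every entry in the malloc-element dump printed by the dpdk_mem_utility test above reports 0.000244 MiB, i.e. 256 bytes. When a dump like this needs condensing, a one-liner over the saved console text is enough (a sketch only; mem_dump.log is an assumed file name holding the output above):

    # Count dump entries by element size and total them up.
    grep -o 'with size: [0-9.]* MiB' mem_dump.log \
      | awk '{count[$3]++; total += $3}
             END {for (s in count) print count[s], "elements of", s, "MiB";
                  print "total:", total, "MiB"}'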
00:04:55.504 ************************************ 00:04:55.504 START TEST event 00:04:55.504 ************************************ 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.504 * Looking for test storage... 00:04:55.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.504 19:54:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.504 19:54:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.504 19:54:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.504 19:54:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.504 19:54:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.504 19:54:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.504 19:54:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.504 19:54:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.504 19:54:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.504 19:54:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.504 19:54:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.504 19:54:29 event -- scripts/common.sh@344 -- # case "$op" in 00:04:55.504 19:54:29 event -- scripts/common.sh@345 -- # : 1 00:04:55.504 19:54:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.504 19:54:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.504 19:54:29 event -- scripts/common.sh@365 -- # decimal 1 00:04:55.504 19:54:29 event -- scripts/common.sh@353 -- # local d=1 00:04:55.504 19:54:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.504 19:54:29 event -- scripts/common.sh@355 -- # echo 1 00:04:55.504 19:54:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.504 19:54:29 event -- scripts/common.sh@366 -- # decimal 2 00:04:55.504 19:54:29 event -- scripts/common.sh@353 -- # local d=2 00:04:55.504 19:54:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.504 19:54:29 event -- scripts/common.sh@355 -- # echo 2 00:04:55.504 19:54:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.504 19:54:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.504 19:54:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.504 19:54:29 event -- scripts/common.sh@368 -- # return 0 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.504 --rc genhtml_branch_coverage=1 00:04:55.504 --rc genhtml_function_coverage=1 00:04:55.504 --rc genhtml_legend=1 00:04:55.504 --rc geninfo_all_blocks=1 00:04:55.504 --rc geninfo_unexecuted_blocks=1 00:04:55.504 00:04:55.504 ' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.504 --rc genhtml_branch_coverage=1 00:04:55.504 --rc genhtml_function_coverage=1 00:04:55.504 --rc genhtml_legend=1 00:04:55.504 --rc 
geninfo_all_blocks=1 00:04:55.504 --rc geninfo_unexecuted_blocks=1 00:04:55.504 00:04:55.504 ' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.504 --rc genhtml_branch_coverage=1 00:04:55.504 --rc genhtml_function_coverage=1 00:04:55.504 --rc genhtml_legend=1 00:04:55.504 --rc geninfo_all_blocks=1 00:04:55.504 --rc geninfo_unexecuted_blocks=1 00:04:55.504 00:04:55.504 ' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.504 --rc genhtml_branch_coverage=1 00:04:55.504 --rc genhtml_function_coverage=1 00:04:55.504 --rc genhtml_legend=1 00:04:55.504 --rc geninfo_all_blocks=1 00:04:55.504 --rc geninfo_unexecuted_blocks=1 00:04:55.504 00:04:55.504 ' 00:04:55.504 19:54:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:55.504 19:54:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.504 19:54:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:55.504 19:54:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.504 19:54:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.504 ************************************ 00:04:55.504 START TEST event_perf 00:04:55.504 ************************************ 00:04:55.504 19:54:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.504 Running I/O for 1 seconds...[2024-11-19 19:54:29.243593] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:55.504 [2024-11-19 19:54:29.243775] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58134 ] 00:04:55.764 [2024-11-19 19:54:29.402854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.764 [2024-11-19 19:54:29.504191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.764 [2024-11-19 19:54:29.504717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.764 [2024-11-19 19:54:29.504354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.764 [2024-11-19 19:54:29.504824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.211 Running I/O for 1 seconds... 00:04:57.211 lcore 0: 183396 00:04:57.211 lcore 1: 183391 00:04:57.211 lcore 2: 183392 00:04:57.211 lcore 3: 183393 00:04:57.211 done. 
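Note: the lt/cmp_versions trace at the top of this event suite (scripts/common.sh, deciding whether the installed lcov predates 2.x before setting LCOV_OPTS) compares dotted version strings field by field. A simplified standalone sketch of that logic, for numeric fields only; it is reconstructed from the trace, not the exact helper:

    version_lt() {
        # Split both versions on . - : (as in the traced IFS=.-: reads) and
        # compare field by field, padding missing fields with 0.
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"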
00:04:57.211 00:04:57.211 real 0m1.439s 00:04:57.211 user 0m4.231s 00:04:57.211 sys 0m0.086s 00:04:57.211 19:54:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.211 19:54:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.211 ************************************ 00:04:57.211 END TEST event_perf 00:04:57.211 ************************************ 00:04:57.211 19:54:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:57.211 19:54:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:57.211 19:54:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.211 19:54:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.211 ************************************ 00:04:57.211 START TEST event_reactor 00:04:57.211 ************************************ 00:04:57.211 19:54:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:57.211 [2024-11-19 19:54:30.734117] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:57.211 [2024-11-19 19:54:30.734372] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58179 ] 00:04:57.211 [2024-11-19 19:54:30.890585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.211 [2024-11-19 19:54:30.973580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.588 test_start 00:04:58.588 oneshot 00:04:58.588 tick 100 00:04:58.588 tick 100 00:04:58.588 tick 250 00:04:58.588 tick 100 00:04:58.588 tick 100 00:04:58.588 tick 250 00:04:58.588 tick 100 00:04:58.588 tick 500 00:04:58.588 tick 100 00:04:58.588 tick 100 00:04:58.588 tick 250 00:04:58.588 tick 100 00:04:58.588 tick 100 00:04:58.588 test_end 00:04:58.588 00:04:58.588 real 0m1.393s 00:04:58.588 user 0m1.216s 00:04:58.588 sys 0m0.070s 00:04:58.588 19:54:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.588 ************************************ 00:04:58.588 19:54:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.588 END TEST event_reactor 00:04:58.588 ************************************ 00:04:58.588 19:54:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.588 19:54:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:58.588 19:54:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.588 19:54:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.588 ************************************ 00:04:58.588 START TEST event_reactor_perf 00:04:58.588 ************************************ 00:04:58.588 19:54:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.588 [2024-11-19 19:54:32.183893] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:58.588 [2024-11-19 19:54:32.183974] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58210 ] 00:04:58.588 [2024-11-19 19:54:32.335562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.850 [2024-11-19 19:54:32.413087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.787 test_start 00:04:59.787 test_end 00:04:59.787 Performance: 413081 events per second 00:04:59.787 00:04:59.787 real 0m1.370s 00:04:59.787 user 0m1.208s 00:04:59.787 sys 0m0.055s 00:04:59.787 19:54:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.787 ************************************ 00:04:59.787 END TEST event_reactor_perf 00:04:59.787 ************************************ 00:04:59.787 19:54:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.787 19:54:33 event -- event/event.sh@49 -- # uname -s 00:04:59.787 19:54:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:59.787 19:54:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:59.787 19:54:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.787 19:54:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.787 19:54:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.049 ************************************ 00:05:00.049 START TEST event_scheduler 00:05:00.049 ************************************ 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.049 * Looking for test storage... 
00:05:00.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.049 19:54:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.049 --rc genhtml_branch_coverage=1 00:05:00.049 --rc genhtml_function_coverage=1 00:05:00.049 --rc genhtml_legend=1 00:05:00.049 --rc geninfo_all_blocks=1 00:05:00.049 --rc geninfo_unexecuted_blocks=1 00:05:00.049 00:05:00.049 ' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.049 --rc genhtml_branch_coverage=1 00:05:00.049 --rc genhtml_function_coverage=1 00:05:00.049 --rc genhtml_legend=1 00:05:00.049 --rc geninfo_all_blocks=1 00:05:00.049 --rc geninfo_unexecuted_blocks=1 00:05:00.049 00:05:00.049 ' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.049 --rc genhtml_branch_coverage=1 00:05:00.049 --rc genhtml_function_coverage=1 00:05:00.049 --rc genhtml_legend=1 00:05:00.049 --rc geninfo_all_blocks=1 00:05:00.049 --rc geninfo_unexecuted_blocks=1 00:05:00.049 00:05:00.049 ' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.049 --rc genhtml_branch_coverage=1 00:05:00.049 --rc genhtml_function_coverage=1 00:05:00.049 --rc genhtml_legend=1 00:05:00.049 --rc geninfo_all_blocks=1 00:05:00.049 --rc geninfo_unexecuted_blocks=1 00:05:00.049 00:05:00.049 ' 00:05:00.049 19:54:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.049 19:54:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58280 00:05:00.049 19:54:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.049 19:54:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58280 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58280 ']' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.049 19:54:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.049 19:54:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.049 [2024-11-19 19:54:33.774211] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:00.049 [2024-11-19 19:54:33.774347] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:05:00.309 [2024-11-19 19:54:33.931180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.309 [2024-11-19 19:54:34.015909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.309 [2024-11-19 19:54:34.016169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.309 [2024-11-19 19:54:34.016244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.309 [2024-11-19 19:54:34.016280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:00.877 19:54:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.877 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.877 POWER: Cannot set governor of lcore 0 to performance 00:05:00.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.877 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.877 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.877 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:00.877 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:00.877 POWER: Unable to set Power Management Environment for lcore 0 00:05:00.877 [2024-11-19 19:54:34.625640] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:00.877 [2024-11-19 19:54:34.625668] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:00.877 [2024-11-19 19:54:34.625686] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:00.877 [2024-11-19 19:54:34.625734] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.877 [2024-11-19 19:54:34.625752] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.877 [2024-11-19 19:54:34.625794] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.877 19:54:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.877 19:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 [2024-11-19 19:54:34.804916] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
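Note: the POWER/GUEST_CHANNEL errors above come from the VM exposing no writable cpufreq interface, so the dynamic scheduler starts without the dpdk governor and only applies its load/core/busy limits. A quick host-side check of the file the power library tries to open (an illustrative sketch, with cpu0 substituted for the %u in the message):

    gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    if [ -w "$gov" ]; then
        echo "cpu0 governor: $(cat "$gov")"
    else
        echo "no writable cpufreq governor for cpu0; dpdk governor init will fail"
    fi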
00:05:01.137 19:54:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.137 19:54:34 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.137 19:54:34 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 ************************************ 00:05:01.137 START TEST scheduler_create_thread 00:05:01.137 ************************************ 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 2 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 3 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 4 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 5 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 6 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 7 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 8 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 9 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 10 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.137 19:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.705 19:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.705 00:05:01.705 real 0m0.591s 00:05:01.705 user 0m0.014s 00:05:01.705 sys 0m0.005s 00:05:01.705 19:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.705 19:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.705 ************************************ 00:05:01.705 END TEST scheduler_create_thread 00:05:01.705 ************************************ 00:05:01.705 19:54:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:01.705 19:54:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58280 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58280 ']' 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58280 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58280 00:05:01.705 killing process with pid 58280 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58280' 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58280 00:05:01.705 19:54:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58280 00:05:02.272 [2024-11-19 19:54:35.884912] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
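Note: the scheduler_create_thread run above reduces to the following plugin RPC calls, condensed from the trace. rpc_cmd is the harness helper seen in the trace; -n names a thread, -m pins it to a core mask, and -a appears to set how busy the thread reports itself (percent):

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    deleted_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$deleted_id"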
00:05:02.840 00:05:02.840 real 0m2.873s 00:05:02.840 user 0m5.612s 00:05:02.840 sys 0m0.315s 00:05:02.840 19:54:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.840 ************************************ 00:05:02.840 END TEST event_scheduler 00:05:02.840 ************************************ 00:05:02.840 19:54:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.840 19:54:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:02.840 19:54:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:02.840 19:54:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.840 19:54:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.840 19:54:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.840 ************************************ 00:05:02.840 START TEST app_repeat 00:05:02.840 ************************************ 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:02.840 Process app_repeat pid: 58364 00:05:02.840 spdk_app_start Round 0 00:05:02.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58364 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58364' 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:02.840 19:54:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58364 /var/tmp/spdk-nbd.sock 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.840 19:54:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.840 [2024-11-19 19:54:36.550977] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:02.840 [2024-11-19 19:54:36.551063] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58364 ] 00:05:03.102 [2024-11-19 19:54:36.703918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.102 [2024-11-19 19:54:36.803118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.102 [2024-11-19 19:54:36.803232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.674 19:54:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.674 19:54:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:03.674 19:54:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.934 Malloc0 00:05:03.934 19:54:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.196 Malloc1 00:05:04.196 19:54:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.196 19:54:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.455 /dev/nbd0 00:05:04.455 19:54:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.455 19:54:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.455 19:54:38 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.455 1+0 records in 00:05:04.455 1+0 records out 00:05:04.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284085 s, 14.4 MB/s 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.455 19:54:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.455 19:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.455 19:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.455 19:54:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.712 /dev/nbd1 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.712 1+0 records in 00:05:04.712 1+0 records out 00:05:04.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228819 s, 17.9 MB/s 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.712 19:54:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:04.712 19:54:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.971 { 00:05:04.971 "nbd_device": "/dev/nbd0", 00:05:04.971 "bdev_name": "Malloc0" 00:05:04.971 }, 00:05:04.971 { 00:05:04.971 "nbd_device": "/dev/nbd1", 00:05:04.971 "bdev_name": "Malloc1" 00:05:04.971 } 00:05:04.971 ]' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.971 { 00:05:04.971 "nbd_device": "/dev/nbd0", 00:05:04.971 "bdev_name": "Malloc0" 00:05:04.971 }, 00:05:04.971 { 00:05:04.971 "nbd_device": "/dev/nbd1", 00:05:04.971 "bdev_name": "Malloc1" 00:05:04.971 } 00:05:04.971 ]' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.971 /dev/nbd1' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.971 /dev/nbd1' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.971 19:54:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.972 256+0 records in 00:05:04.972 256+0 records out 00:05:04.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634505 s, 165 MB/s 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.972 256+0 records in 00:05:04.972 256+0 records out 00:05:04.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206024 s, 50.9 MB/s 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.972 256+0 records in 00:05:04.972 256+0 records out 00:05:04.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233996 s, 44.8 MB/s 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.972 19:54:38 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.972 19:54:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.230 19:54:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.231 19:54:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.489 19:54:39 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.489 19:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.747 19:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.748 19:54:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.748 19:54:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.006 19:54:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.574 [2024-11-19 19:54:40.262069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.574 [2024-11-19 19:54:40.339411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.574 [2024-11-19 19:54:40.339525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.832 [2024-11-19 19:54:40.442711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.832 [2024-11-19 19:54:40.442775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.363 spdk_app_start Round 1 00:05:09.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.363 19:54:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.363 19:54:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.363 19:54:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58364 /var/tmp/spdk-nbd.sock 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.363 19:54:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.363 19:54:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.363 Malloc0 00:05:09.363 19:54:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.622 Malloc1 00:05:09.622 19:54:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.622 19:54:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.881 /dev/nbd0 00:05:09.881 19:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.881 19:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.881 1+0 records in 00:05:09.881 1+0 records out 
00:05:09.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044745 s, 9.2 MB/s 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.881 19:54:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.881 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.881 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.881 19:54:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.139 /dev/nbd1 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.139 1+0 records in 00:05:10.139 1+0 records out 00:05:10.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019725 s, 20.8 MB/s 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.139 19:54:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.139 19:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.398 19:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.398 { 00:05:10.398 "nbd_device": "/dev/nbd0", 00:05:10.398 "bdev_name": "Malloc0" 00:05:10.398 }, 00:05:10.398 { 00:05:10.398 "nbd_device": "/dev/nbd1", 00:05:10.398 "bdev_name": "Malloc1" 00:05:10.398 } 00:05:10.398 
]' 00:05:10.398 19:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.398 19:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.398 { 00:05:10.398 "nbd_device": "/dev/nbd0", 00:05:10.398 "bdev_name": "Malloc0" 00:05:10.398 }, 00:05:10.398 { 00:05:10.398 "nbd_device": "/dev/nbd1", 00:05:10.398 "bdev_name": "Malloc1" 00:05:10.398 } 00:05:10.398 ]' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.398 /dev/nbd1' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.398 /dev/nbd1' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.398 256+0 records in 00:05:10.398 256+0 records out 00:05:10.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00673066 s, 156 MB/s 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.398 256+0 records in 00:05:10.398 256+0 records out 00:05:10.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017582 s, 59.6 MB/s 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.398 256+0 records in 00:05:10.398 256+0 records out 00:05:10.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236159 s, 44.4 MB/s 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.398 19:54:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.657 19:54:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.915 19:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.173 19:54:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.173 19:54:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.431 19:54:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.998 [2024-11-19 19:54:45.547266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.998 [2024-11-19 19:54:45.614736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.998 [2024-11-19 19:54:45.614736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.998 [2024-11-19 19:54:45.716134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.998 [2024-11-19 19:54:45.716178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.526 spdk_app_start Round 2 00:05:14.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.526 19:54:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.526 19:54:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.526 19:54:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58364 /var/tmp/spdk-nbd.sock 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.526 19:54:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.526 19:54:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.785 Malloc0 00:05:14.785 19:54:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.043 Malloc1 00:05:15.043 19:54:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.043 19:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.043 /dev/nbd0 00:05:15.301 19:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.301 19:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.301 1+0 records in 00:05:15.301 1+0 records out 
00:05:15.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268253 s, 15.3 MB/s 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.301 19:54:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.301 19:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.301 19:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.301 19:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.301 /dev/nbd1 00:05:15.301 19:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.560 1+0 records in 00:05:15.560 1+0 records out 00:05:15.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255201 s, 16.1 MB/s 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.560 19:54:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.560 { 00:05:15.560 "nbd_device": "/dev/nbd0", 00:05:15.560 "bdev_name": "Malloc0" 00:05:15.560 }, 00:05:15.560 { 00:05:15.560 "nbd_device": "/dev/nbd1", 00:05:15.560 "bdev_name": "Malloc1" 00:05:15.560 } 
00:05:15.560 ]' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.560 { 00:05:15.560 "nbd_device": "/dev/nbd0", 00:05:15.560 "bdev_name": "Malloc0" 00:05:15.560 }, 00:05:15.560 { 00:05:15.560 "nbd_device": "/dev/nbd1", 00:05:15.560 "bdev_name": "Malloc1" 00:05:15.560 } 00:05:15.560 ]' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.560 /dev/nbd1' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.560 /dev/nbd1' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.560 19:54:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.818 256+0 records in 00:05:15.818 256+0 records out 00:05:15.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536547 s, 195 MB/s 00:05:15.818 19:54:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.818 19:54:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.818 256+0 records in 00:05:15.818 256+0 records out 00:05:15.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016557 s, 63.3 MB/s 00:05:15.818 19:54:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.818 19:54:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.818 256+0 records in 00:05:15.818 256+0 records out 00:05:15.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188984 s, 55.5 MB/s 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.819 19:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.077 19:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.335 19:54:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.335 19:54:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.594 19:54:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.160 [2024-11-19 19:54:50.890796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.418 [2024-11-19 19:54:50.970849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.418 [2024-11-19 19:54:50.970866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.418 [2024-11-19 19:54:51.071996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.418 [2024-11-19 19:54:51.072040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.991 19:54:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58364 /var/tmp/spdk-nbd.sock 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.991 19:54:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58364 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58364 ']' 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58364 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58364 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.991 killing process with pid 58364 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58364' 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58364 00:05:19.991 19:54:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58364 00:05:20.559 spdk_app_start is called in Round 0. 00:05:20.559 Shutdown signal received, stop current app iteration 00:05:20.559 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.559 spdk_app_start is called in Round 1. 00:05:20.559 Shutdown signal received, stop current app iteration 00:05:20.559 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.559 spdk_app_start is called in Round 2. 00:05:20.559 Shutdown signal received, stop current app iteration 00:05:20.559 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.559 spdk_app_start is called in Round 3. 00:05:20.559 Shutdown signal received, stop current app iteration 00:05:20.559 ************************************ 00:05:20.559 END TEST app_repeat 00:05:20.559 ************************************ 00:05:20.559 19:54:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.559 19:54:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.559 00:05:20.559 real 0m17.578s 00:05:20.559 user 0m38.500s 00:05:20.559 sys 0m2.042s 00:05:20.559 19:54:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.559 19:54:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.559 19:54:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.559 19:54:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.559 19:54:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.559 19:54:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.559 19:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.559 ************************************ 00:05:20.559 START TEST cpu_locks 00:05:20.559 ************************************ 00:05:20.559 19:54:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.559 * Looking for test storage... 
00:05:20.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.559 19:54:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.559 19:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.559 19:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.560 19:54:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.560 --rc genhtml_branch_coverage=1 00:05:20.560 --rc genhtml_function_coverage=1 00:05:20.560 --rc genhtml_legend=1 00:05:20.560 --rc geninfo_all_blocks=1 00:05:20.560 --rc geninfo_unexecuted_blocks=1 00:05:20.560 00:05:20.560 ' 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.560 --rc genhtml_branch_coverage=1 00:05:20.560 --rc genhtml_function_coverage=1 
00:05:20.560 --rc genhtml_legend=1 00:05:20.560 --rc geninfo_all_blocks=1 00:05:20.560 --rc geninfo_unexecuted_blocks=1 00:05:20.560 00:05:20.560 ' 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.560 --rc genhtml_branch_coverage=1 00:05:20.560 --rc genhtml_function_coverage=1 00:05:20.560 --rc genhtml_legend=1 00:05:20.560 --rc geninfo_all_blocks=1 00:05:20.560 --rc geninfo_unexecuted_blocks=1 00:05:20.560 00:05:20.560 ' 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.560 --rc genhtml_branch_coverage=1 00:05:20.560 --rc genhtml_function_coverage=1 00:05:20.560 --rc genhtml_legend=1 00:05:20.560 --rc geninfo_all_blocks=1 00:05:20.560 --rc geninfo_unexecuted_blocks=1 00:05:20.560 00:05:20.560 ' 00:05:20.560 19:54:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:20.560 19:54:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:20.560 19:54:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:20.560 19:54:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.560 19:54:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.560 ************************************ 00:05:20.560 START TEST default_locks 00:05:20.560 ************************************ 00:05:20.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58796 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58796 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58796 ']' 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.560 19:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 [2024-11-19 19:54:54.380069] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
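The lcov gate traced just above splits both version strings on '.', '-' and ':' and compares them field by field (lt 1.15 2 succeeds because 1 < 2, so the pre-2.0 lcov flags are selected). A minimal stand-alone sketch of that comparison, using a hypothetical ver_lt name and assuming purely numeric fields (the real scripts/common.sh helper also handles non-numeric components):

    # ver_lt A B -> exit 0 if version A is strictly lower than version B
    ver_lt() {
        local IFS=.-:                        # split fields the same way the traced helper does
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1 etc."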
00:05:20.819 [2024-11-19 19:54:54.380189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:05:20.819 [2024-11-19 19:54:54.535730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.077 [2024-11-19 19:54:54.624593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58796 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58796 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58796 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58796 ']' 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58796 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.645 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58796 00:05:21.903 killing process with pid 58796 00:05:21.903 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.903 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.903 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58796' 00:05:21.903 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58796 00:05:21.903 19:54:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58796 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58796 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58796 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58796 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58796 ']' 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.838 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.838 ERROR: process (pid: 58796) is no longer running 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58796) - No such process 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.838 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.839 ************************************ 00:05:22.839 END TEST default_locks 00:05:22.839 ************************************ 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.839 00:05:22.839 real 0m2.316s 00:05:22.839 user 0m2.372s 00:05:22.839 sys 0m0.411s 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.839 19:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.098 19:54:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:23.098 19:54:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.098 19:54:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.098 19:54:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.098 ************************************ 00:05:23.098 START TEST default_locks_via_rpc 00:05:23.098 ************************************ 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58849 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58849 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58849 ']' 00:05:23.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
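Two checks recur throughout this suite: locks_exist asserts that the target with a given pid holds a file lock whose path contains spdk_cpu_lock (via lslocks), and no_locks asserts that no /var/tmp/spdk_cpu_lock_* files remain after teardown. A simplified stand-in for both, under hypothetical assert_lock_held/assert_no_locks names rather than the real helpers:

    # Assert that process $1 holds an spdk_cpu_lock file lock (cf. locks_exist).
    assert_lock_held() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    # Assert that no per-core lock files are left behind (cf. no_locks).
    assert_no_locks() {
        local files=(/var/tmp/spdk_cpu_lock_*)
        # an unmatched glob expands to itself, so test whether a real file exists
        [[ ! -e ${files[0]} ]]
    }
    assert_lock_held 58796 && echo "pid 58796 owns a CPU core lock"
    assert_no_locks || echo "stale lock files left under /var/tmp"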
00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.098 19:54:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.098 [2024-11-19 19:54:56.757802] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:23.099 [2024-11-19 19:54:56.757917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58849 ] 00:05:23.356 [2024-11-19 19:54:56.913994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.356 [2024-11-19 19:54:56.989748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58849 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58849 00:05:23.922 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58849 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58849 ']' 
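default_locks_via_rpc drives the same per-core locks through the JSON-RPC interface instead of the command line: framework_disable_cpumask_locks releases the lock files and framework_enable_cpumask_locks re-claims them, after which lslocks again finds spdk_cpu_lock held by the target. Outside the harness the same toggling could be done with SPDK's rpc.py client, sketched here (socket path, working directory and the $tgt_pid variable are assumptions):

    # Assumes a running spdk_tgt (pid in $tgt_pid) listening on /var/tmp/spdk.sock,
    # run from the root of an SPDK checkout.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the per-core locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim them again
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock                               # lock is held once more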
00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58849 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58849 00:05:24.180 killing process with pid 58849 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58849' 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58849 00:05:24.180 19:54:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58849 00:05:25.573 ************************************ 00:05:25.574 END TEST default_locks_via_rpc 00:05:25.574 ************************************ 00:05:25.574 00:05:25.574 real 0m2.285s 00:05:25.574 user 0m2.306s 00:05:25.574 sys 0m0.421s 00:05:25.574 19:54:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.574 19:54:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.574 19:54:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.574 19:54:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.574 19:54:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.574 19:54:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.574 ************************************ 00:05:25.574 START TEST non_locking_app_on_locked_coremask 00:05:25.574 ************************************ 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58901 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58901 /var/tmp/spdk.sock 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58901 ']' 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.574 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
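Each test tears its target down with the killprocess helper traced above: confirm the pid still exists with kill -0, check via ps that it is the reactor process (and not something like sudo), then kill it and wait for it. A condensed, hypothetical stop_target version of that sequence:

    stop_target() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")          # e.g. "reactor_0"
        [[ $name == sudo ]] && return 1                  # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                          # reap it if it is our child
    }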
00:05:25.575 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.575 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.575 [2024-11-19 19:54:59.107580] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:25.575 [2024-11-19 19:54:59.107696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:05:25.575 [2024-11-19 19:54:59.261138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.575 [2024-11-19 19:54:59.347523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58917 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58917 /var/tmp/spdk2.sock 00:05:26.145 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58917 ']' 00:05:26.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.146 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.146 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.146 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.146 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.146 19:54:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.403 [2024-11-19 19:55:00.000607] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:26.403 [2024-11-19 19:55:00.000898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:05:26.403 [2024-11-19 19:55:00.161721] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
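non_locking_app_on_locked_coremask starts a second target on the same core but with --disable-cpumask-locks and its own RPC socket, which is why the second instance above logs "CPU core locks deactivated" instead of exiting. The two launch lines, reconstructed from the trace (binary path as in the log; pids will differ per run):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                                                  # first target claims the core-0 lock
    tgt1=$!
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips locking
    tgt2=$!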
00:05:26.403 [2024-11-19 19:55:00.161760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.661 [2024-11-19 19:55:00.322985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.594 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.594 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.594 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58901 00:05:27.594 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58901 00:05:27.594 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58901 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58901 ']' 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58901 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58901 00:05:27.853 killing process with pid 58901 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58901' 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58901 00:05:27.853 19:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58901 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58917 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58917 ']' 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58917 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58917 00:05:30.382 killing process with pid 58917 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58917' 00:05:30.382 19:55:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58917 00:05:30.382 19:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58917 00:05:31.757 00:05:31.757 real 0m6.122s 00:05:31.757 user 0m6.397s 00:05:31.757 sys 0m0.770s 00:05:31.757 19:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.757 19:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.757 ************************************ 00:05:31.757 END TEST non_locking_app_on_locked_coremask 00:05:31.757 ************************************ 00:05:31.757 19:55:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:31.757 19:55:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.757 19:55:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.757 19:55:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.757 ************************************ 00:05:31.757 START TEST locking_app_on_unlocked_coremask 00:05:31.757 ************************************ 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:31.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59008 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59008 /var/tmp/spdk.sock 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59008 ']' 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.757 19:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.757 [2024-11-19 19:55:05.266093] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:31.757 [2024-11-19 19:55:05.266424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ] 00:05:31.757 [2024-11-19 19:55:05.422136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
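Every launch above is followed by waitforlisten, which blocks until the new target answers on its UNIX domain RPC socket (the repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines). A rough stand-in under a hypothetical wait_for_rpc name, assuming rpc.py is available; the real helper likewise caps the retries (max_retries=100 in the trace):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while (( tries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target died while starting
            ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }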
00:05:31.757 [2024-11-19 19:55:05.422308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.757 [2024-11-19 19:55:05.524137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59024 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59024 /var/tmp/spdk2.sock 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59024 ']' 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.691 19:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.691 [2024-11-19 19:55:06.197207] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:32.691 [2024-11-19 19:55:06.197338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59024 ] 00:05:32.691 [2024-11-19 19:55:06.371631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.949 [2024-11-19 19:55:06.574405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.372 19:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.372 19:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.372 19:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59024 00:05:34.372 19:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59024 00:05:34.372 19:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59008 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59008 ']' 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59008 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59008 00:05:34.372 killing process with pid 59008 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59008' 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59008 00:05:34.372 19:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59008 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59024 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59024 ']' 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59024 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59024 00:05:36.900 killing process with pid 59024 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.900 19:55:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59024' 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59024 00:05:36.900 19:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59024 00:05:37.834 00:05:37.834 real 0m6.418s 00:05:37.834 user 0m6.619s 00:05:37.834 sys 0m0.854s 00:05:37.834 19:55:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.834 ************************************ 00:05:37.834 END TEST locking_app_on_unlocked_coremask 00:05:37.834 ************************************ 00:05:37.834 19:55:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 19:55:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.093 19:55:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.093 19:55:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.093 19:55:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 ************************************ 00:05:38.093 START TEST locking_app_on_locked_coremask 00:05:38.093 ************************************ 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59126 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59126 /var/tmp/spdk.sock 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59126 ']' 00:05:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.093 19:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 [2024-11-19 19:55:11.732148] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:38.093 [2024-11-19 19:55:11.732283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:05:38.352 [2024-11-19 19:55:11.888610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.352 [2024-11-19 19:55:11.986768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59137 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59137 /var/tmp/spdk2.sock 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59137 /var/tmp/spdk2.sock 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59137 /var/tmp/spdk2.sock 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59137 ']' 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.918 19:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.918 [2024-11-19 19:55:12.681411] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
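locking_app_on_locked_coremask expects the second launch to fail, so waitforlisten is wrapped in the harness's NOT helper: the wrapped command must exit non-zero (and not with a signal status above 128) for the test to pass. A minimal sketch of that inversion under a hypothetical not_ok name, simplifying the signal handling:

    # Succeed only if the wrapped command fails "cleanly" (exit status 1..128).
    not_ok() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1      # looks like a signal: treat as a real failure
        (( es == 0 )) && return 1       # command unexpectedly succeeded
        return 0
    }
    not_ok waitforlisten "$tgt2" /var/tmp/spdk2.sock   # passes only if the second target never comes up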
00:05:38.918 [2024-11-19 19:55:12.681557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59137 ] 00:05:39.176 [2024-11-19 19:55:12.871414] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59126 has claimed it. 00:05:39.176 [2024-11-19 19:55:12.871477] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59137) - No such process 00:05:39.743 ERROR: process (pid: 59137) is no longer running 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59126 ']' 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59126 00:05:39.743 killing process with pid 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59126' 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59126 00:05:39.743 19:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59126 00:05:41.125 ************************************ 00:05:41.125 END TEST locking_app_on_locked_coremask 00:05:41.125 ************************************ 00:05:41.125 00:05:41.125 real 0m3.094s 00:05:41.125 user 0m3.291s 00:05:41.125 sys 0m0.553s 00:05:41.125 19:55:14 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.125 19:55:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 19:55:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.125 19:55:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.125 19:55:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.125 19:55:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 ************************************ 00:05:41.125 START TEST locking_overlapped_coremask 00:05:41.125 ************************************ 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59195 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59195 /var/tmp/spdk.sock 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59195 ']' 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 19:55:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.125 [2024-11-19 19:55:14.877620] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:41.125 [2024-11-19 19:55:14.877806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:05:41.384 [2024-11-19 19:55:15.035030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.384 [2024-11-19 19:55:15.125096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.384 [2024-11-19 19:55:15.125391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.384 [2024-11-19 19:55:15.125426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59208 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59208 /var/tmp/spdk2.sock 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59208 /var/tmp/spdk2.sock 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59208 /var/tmp/spdk2.sock 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59208 ']' 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.950 19:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.207 [2024-11-19 19:55:15.801457] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
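The overlap test runs the first target with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4); the two masks collide only on core 2, which is exactly the core the error below complains about. The overlap can be checked with plain shell arithmetic:

    first=0x7      # binary 00111 -> cores 0,1,2
    second=0x1c    # binary 11100 -> cores 2,3,4
    printf 'shared cores mask: 0x%x\n' $(( first & second ))   # prints 0x4, i.e. core 2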
00:05:42.207 [2024-11-19 19:55:15.801954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ] 00:05:42.207 [2024-11-19 19:55:15.980487] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59195 has claimed it. 00:05:42.207 [2024-11-19 19:55:15.980549] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.772 ERROR: process (pid: 59208) is no longer running 00:05:42.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59208) - No such process 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59195 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59195 ']' 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59195 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59195 00:05:42.772 killing process with pid 59195 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59195' 00:05:42.772 19:55:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59195 00:05:42.772 19:55:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59195 00:05:44.148 ************************************ 00:05:44.148 END TEST locking_overlapped_coremask 00:05:44.148 ************************************ 00:05:44.148 00:05:44.148 real 0m2.857s 00:05:44.148 user 0m7.791s 00:05:44.148 sys 0m0.423s 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.148 19:55:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.148 19:55:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.148 19:55:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.148 19:55:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.148 ************************************ 00:05:44.148 START TEST locking_overlapped_coremask_via_rpc 00:05:44.148 ************************************ 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59261 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59261 /var/tmp/spdk.sock 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59261 ']' 00:05:44.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.148 19:55:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.148 [2024-11-19 19:55:17.789092] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:44.148 [2024-11-19 19:55:17.789209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ] 00:05:44.406 [2024-11-19 19:55:17.940957] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.406 [2024-11-19 19:55:17.940990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.406 [2024-11-19 19:55:18.019015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.406 [2024-11-19 19:55:18.019298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.406 [2024-11-19 19:55:18.019317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59273 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59273 /var/tmp/spdk2.sock 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.970 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:05:44.971 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.971 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.971 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.971 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.971 19:55:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.971 [2024-11-19 19:55:18.651668] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:44.971 [2024-11-19 19:55:18.651914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:05:45.229 [2024-11-19 19:55:18.825980] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
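In the via_rpc variant both targets start with --disable-cpumask-locks (hence the two "CPU core locks deactivated" notices) and the locks are only claimed later through RPC. The lock itself is one file per claimed core under /var/tmp, which is what check_remaining_locks compared against /var/tmp/spdk_cpu_lock_{000..002} earlier for the same 0x7 mask. A hedged illustration of listing them once the first target has enabled its locks (file names come from the trace; the flock detail is an assumption):

    # After: ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*        # expected: ..._000 ..._001 ..._002 for mask 0x7
    lslocks | grep spdk_cpu_lock       # each file held (presumably an advisory lock) by the target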
00:05:45.229 [2024-11-19 19:55:18.826021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.487 [2024-11-19 19:55:19.031214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.487 [2024-11-19 19:55:19.031353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.487 [2024-11-19 19:55:19.031370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.421 [2024-11-19 19:55:20.195355] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59261 has claimed it. 00:05:46.421 request: 00:05:46.421 { 00:05:46.421 "method": "framework_enable_cpumask_locks", 00:05:46.421 "req_id": 1 00:05:46.421 } 00:05:46.421 Got JSON-RPC error response 00:05:46.421 response: 00:05:46.421 { 00:05:46.421 "code": -32603, 00:05:46.421 "message": "Failed to claim CPU core: 2" 00:05:46.421 } 00:05:46.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
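The -32603 response above is the expected result of this test step: both targets were launched with --disable-cpumask-locks and overlapping core masks (0x7 covers cores 0-2, 0x1c covers cores 2-4), so once the first target claims its cores via framework_enable_cpumask_locks, the second target's claim on core 2 must fail. A minimal bash sketch of the same sequence, assuming the binary and script paths shown elsewhere in this log and a crude sleep in place of the harness's waitforlisten:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start two targets with overlapping masks; core 2 is in both 0x7 and 0x1c.
    "$SPDK_BIN/spdk_tgt" -m 0x7 --disable-cpumask-locks &
    "$SPDK_BIN/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2  # crude wait; the test harness uses waitforlisten instead

    # First target claims cores 0-2; the second target's claim then fails on core 2
    # with the same "Failed to claim CPU core: 2" (-32603) error seen above.
    "$RPC" framework_enable_cpumask_locks
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks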
00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59261 /var/tmp/spdk.sock 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59261 ']' 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.421 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.679 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.679 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59273 /var/tmp/spdk2.sock 00:05:46.679 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:05:46.680 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.680 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.680 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:46.680 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.680 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.938 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.938 00:05:46.938 real 0m2.918s 00:05:46.938 user 0m1.042s 00:05:46.939 sys 0m0.114s 00:05:46.939 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.939 19:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.939 ************************************ 00:05:46.939 END TEST locking_overlapped_coremask_via_rpc 00:05:46.939 ************************************ 00:05:46.939 19:55:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.939 19:55:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59261 ]] 00:05:46.939 19:55:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59261 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59261 ']' 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59261 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59261 00:05:46.939 killing process with pid 59261 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59261' 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59261 00:05:46.939 19:55:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59261 00:05:48.312 19:55:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59273 ]] 00:05:48.312 19:55:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59273 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59273 ']' 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59273 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.312 
19:55:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59273 00:05:48.312 killing process with pid 59273 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59273' 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59273 00:05:48.312 19:55:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59273 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.247 Process with pid 59261 is not found 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59261 ]] 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59261 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59261 ']' 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59261 00:05:49.247 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59261) - No such process 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59261 is not found' 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59273 ]] 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59273 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59273 ']' 00:05:49.247 Process with pid 59273 is not found 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59273 00:05:49.247 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59273) - No such process 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59273 is not found' 00:05:49.247 19:55:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.247 ************************************ 00:05:49.247 END TEST cpu_locks 00:05:49.247 ************************************ 00:05:49.247 00:05:49.247 real 0m28.887s 00:05:49.247 user 0m49.908s 00:05:49.247 sys 0m4.322s 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.247 19:55:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.507 ************************************ 00:05:49.507 END TEST event 00:05:49.507 ************************************ 00:05:49.507 00:05:49.507 real 0m54.000s 00:05:49.507 user 1m40.807s 00:05:49.507 sys 0m7.134s 00:05:49.507 19:55:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.507 19:55:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.507 19:55:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:49.507 19:55:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.507 19:55:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.507 19:55:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.507 ************************************ 00:05:49.507 START TEST thread 00:05:49.507 ************************************ 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:49.507 * Looking for test storage... 
00:05:49.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.507 19:55:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.507 19:55:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.507 19:55:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.507 19:55:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.507 19:55:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.507 19:55:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.507 19:55:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.507 19:55:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.507 19:55:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.507 19:55:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.507 19:55:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.507 19:55:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:49.507 19:55:23 thread -- scripts/common.sh@345 -- # : 1 00:05:49.507 19:55:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.507 19:55:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.507 19:55:23 thread -- scripts/common.sh@365 -- # decimal 1 00:05:49.507 19:55:23 thread -- scripts/common.sh@353 -- # local d=1 00:05:49.507 19:55:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.507 19:55:23 thread -- scripts/common.sh@355 -- # echo 1 00:05:49.507 19:55:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.507 19:55:23 thread -- scripts/common.sh@366 -- # decimal 2 00:05:49.507 19:55:23 thread -- scripts/common.sh@353 -- # local d=2 00:05:49.507 19:55:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.507 19:55:23 thread -- scripts/common.sh@355 -- # echo 2 00:05:49.507 19:55:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.507 19:55:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.507 19:55:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.507 19:55:23 thread -- scripts/common.sh@368 -- # return 0 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.507 --rc genhtml_branch_coverage=1 00:05:49.507 --rc genhtml_function_coverage=1 00:05:49.507 --rc genhtml_legend=1 00:05:49.507 --rc geninfo_all_blocks=1 00:05:49.507 --rc geninfo_unexecuted_blocks=1 00:05:49.507 00:05:49.507 ' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.507 --rc genhtml_branch_coverage=1 00:05:49.507 --rc genhtml_function_coverage=1 00:05:49.507 --rc genhtml_legend=1 00:05:49.507 --rc geninfo_all_blocks=1 00:05:49.507 --rc geninfo_unexecuted_blocks=1 00:05:49.507 00:05:49.507 ' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:49.507 --rc genhtml_branch_coverage=1 00:05:49.507 --rc genhtml_function_coverage=1 00:05:49.507 --rc genhtml_legend=1 00:05:49.507 --rc geninfo_all_blocks=1 00:05:49.507 --rc geninfo_unexecuted_blocks=1 00:05:49.507 00:05:49.507 ' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.507 --rc genhtml_branch_coverage=1 00:05:49.507 --rc genhtml_function_coverage=1 00:05:49.507 --rc genhtml_legend=1 00:05:49.507 --rc geninfo_all_blocks=1 00:05:49.507 --rc geninfo_unexecuted_blocks=1 00:05:49.507 00:05:49.507 ' 00:05:49.507 19:55:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.507 19:55:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.507 ************************************ 00:05:49.507 START TEST thread_poller_perf 00:05:49.507 ************************************ 00:05:49.507 19:55:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.507 [2024-11-19 19:55:23.297533] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:49.507 [2024-11-19 19:55:23.297921] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59433 ] 00:05:49.764 [2024-11-19 19:55:23.454580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.764 [2024-11-19 19:55:23.532126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.764 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:51.137 [2024-11-19T19:55:24.931Z] ====================================== 00:05:51.137 [2024-11-19T19:55:24.931Z] busy:2608984286 (cyc) 00:05:51.137 [2024-11-19T19:55:24.931Z] total_run_count: 402000 00:05:51.137 [2024-11-19T19:55:24.931Z] tsc_hz: 2600000000 (cyc) 00:05:51.137 [2024-11-19T19:55:24.932Z] ====================================== 00:05:51.138 [2024-11-19T19:55:24.932Z] poller_cost: 6490 (cyc), 2496 (nsec) 00:05:51.138 00:05:51.138 real 0m1.391s 00:05:51.138 user 0m1.217s 00:05:51.138 sys 0m0.066s 00:05:51.138 19:55:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.138 ************************************ 00:05:51.138 END TEST thread_poller_perf 00:05:51.138 ************************************ 00:05:51.138 19:55:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.138 19:55:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.138 19:55:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:51.138 19:55:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.138 19:55:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.138 ************************************ 00:05:51.138 START TEST thread_poller_perf 00:05:51.138 ************************************ 00:05:51.138 19:55:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.138 [2024-11-19 19:55:24.743744] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:51.138 [2024-11-19 19:55:24.743850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ] 00:05:51.138 [2024-11-19 19:55:24.898874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.396 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:51.396 [2024-11-19 19:55:24.976077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.331 [2024-11-19T19:55:26.125Z] ====================================== 00:05:52.331 [2024-11-19T19:55:26.125Z] busy:2602419172 (cyc) 00:05:52.331 [2024-11-19T19:55:26.125Z] total_run_count: 5264000 00:05:52.331 [2024-11-19T19:55:26.125Z] tsc_hz: 2600000000 (cyc) 00:05:52.331 [2024-11-19T19:55:26.125Z] ====================================== 00:05:52.331 [2024-11-19T19:55:26.125Z] poller_cost: 494 (cyc), 190 (nsec) 00:05:52.331 00:05:52.331 real 0m1.385s 00:05:52.331 user 0m1.215s 00:05:52.331 sys 0m0.064s 00:05:52.331 ************************************ 00:05:52.331 END TEST thread_poller_perf 00:05:52.331 ************************************ 00:05:52.331 19:55:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.331 19:55:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.590 19:55:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.590 ************************************ 00:05:52.590 END TEST thread 00:05:52.590 ************************************ 00:05:52.590 00:05:52.590 real 0m3.013s 00:05:52.590 user 0m2.543s 00:05:52.590 sys 0m0.244s 00:05:52.590 19:55:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.590 19:55:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.590 19:55:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:52.590 19:55:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.590 19:55:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.590 19:55:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.590 19:55:26 -- common/autotest_common.sh@10 -- # set +x 00:05:52.590 ************************************ 00:05:52.590 START TEST app_cmdline 00:05:52.590 ************************************ 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.590 * Looking for test storage... 
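The poller_cost values in the two poller_perf result tables above follow from the other columns: busy cycles divided by total_run_count gives the cost of one poller iteration in cycles, and dividing that by tsc_hz converts it to nanoseconds. A small bash sketch of that arithmetic (the formula is inferred from the reported numbers, not taken from the poller_perf source):

    # Values from the first run above; integer math reproduces the reported figures.
    busy_cyc=2608984286     # "busy" (cyc)
    runs=402000             # "total_run_count"
    tsc_hz=2600000000       # "tsc_hz" (cyc)

    cost_cyc=$(( busy_cyc / runs ))                    # 6490 cyc per iteration
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # 2496 nsec at 2.6 GHz
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic applied to the second run (2602419172 cyc over 5264000 iterations) gives the reported 494 cyc / 190 nsec.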
00:05:52.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.590 19:55:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.590 19:55:26 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.591 --rc genhtml_branch_coverage=1 00:05:52.591 --rc genhtml_function_coverage=1 00:05:52.591 --rc genhtml_legend=1 00:05:52.591 --rc geninfo_all_blocks=1 00:05:52.591 --rc geninfo_unexecuted_blocks=1 00:05:52.591 00:05:52.591 ' 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.591 --rc genhtml_branch_coverage=1 00:05:52.591 --rc genhtml_function_coverage=1 00:05:52.591 --rc genhtml_legend=1 00:05:52.591 --rc geninfo_all_blocks=1 00:05:52.591 --rc geninfo_unexecuted_blocks=1 00:05:52.591 
00:05:52.591 ' 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.591 --rc genhtml_branch_coverage=1 00:05:52.591 --rc genhtml_function_coverage=1 00:05:52.591 --rc genhtml_legend=1 00:05:52.591 --rc geninfo_all_blocks=1 00:05:52.591 --rc geninfo_unexecuted_blocks=1 00:05:52.591 00:05:52.591 ' 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.591 --rc genhtml_branch_coverage=1 00:05:52.591 --rc genhtml_function_coverage=1 00:05:52.591 --rc genhtml_legend=1 00:05:52.591 --rc geninfo_all_blocks=1 00:05:52.591 --rc geninfo_unexecuted_blocks=1 00:05:52.591 00:05:52.591 ' 00:05:52.591 19:55:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:52.591 19:55:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59553 00:05:52.591 19:55:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:52.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.591 19:55:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59553 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59553 ']' 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.591 19:55:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.849 [2024-11-19 19:55:26.391190] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:52.849 [2024-11-19 19:55:26.391457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:05:52.849 [2024-11-19 19:55:26.540198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.849 [2024-11-19 19:55:26.618375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.415 19:55:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.415 19:55:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:53.415 19:55:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:53.673 { 00:05:53.673 "version": "SPDK v25.01-pre git sha1 f22e807f1", 00:05:53.673 "fields": { 00:05:53.673 "major": 25, 00:05:53.673 "minor": 1, 00:05:53.673 "patch": 0, 00:05:53.673 "suffix": "-pre", 00:05:53.673 "commit": "f22e807f1" 00:05:53.673 } 00:05:53.673 } 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.673 19:55:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:53.673 19:55:27 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.931 request: 00:05:53.931 { 00:05:53.931 "method": "env_dpdk_get_mem_stats", 00:05:53.931 "req_id": 1 00:05:53.931 } 00:05:53.931 Got JSON-RPC error response 00:05:53.931 response: 00:05:53.931 { 00:05:53.931 "code": -32601, 00:05:53.931 "message": "Method not found" 00:05:53.931 } 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.931 19:55:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59553 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59553 ']' 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59553 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59553 00:05:53.931 killing process with pid 59553 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59553' 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 59553 00:05:53.931 19:55:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 59553 00:05:55.308 00:05:55.308 real 0m2.626s 00:05:55.308 user 0m2.924s 00:05:55.308 sys 0m0.380s 00:05:55.308 ************************************ 00:05:55.308 END TEST app_cmdline 00:05:55.308 ************************************ 00:05:55.308 19:55:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.308 19:55:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.308 19:55:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.308 19:55:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.308 19:55:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.308 19:55:28 -- common/autotest_common.sh@10 -- # set +x 00:05:55.308 ************************************ 00:05:55.308 START TEST version 00:05:55.308 ************************************ 00:05:55.308 19:55:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.308 * Looking for test storage... 
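The -32601 "Method not found" response above is what this suite is checking for: the cmdline target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any RPC outside that allow-list, including env_dpdk_get_mem_stats, is rejected. A minimal bash sketch of the same check, assuming the paths used throughout this log and a crude sleep in place of waitforlisten:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2  # crude wait

    "$RPC" spdk_get_version        # allowed: returns the version object seen above
    "$RPC" rpc_get_methods         # allowed: lists the two permitted methods
    "$RPC" env_dpdk_get_mem_stats  # not on the allow-list: fails with -32601 Method not found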
00:05:55.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.308 19:55:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.308 19:55:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.308 19:55:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.308 19:55:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.308 19:55:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.308 19:55:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.308 19:55:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.308 19:55:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.308 19:55:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.308 19:55:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.308 19:55:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.308 19:55:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.308 19:55:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.308 19:55:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.308 19:55:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.308 19:55:28 version -- scripts/common.sh@344 -- # case "$op" in 00:05:55.308 19:55:28 version -- scripts/common.sh@345 -- # : 1 00:05:55.308 19:55:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.308 19:55:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.308 19:55:28 version -- scripts/common.sh@365 -- # decimal 1 00:05:55.308 19:55:28 version -- scripts/common.sh@353 -- # local d=1 00:05:55.308 19:55:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.308 19:55:28 version -- scripts/common.sh@355 -- # echo 1 00:05:55.308 19:55:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.308 19:55:28 version -- scripts/common.sh@366 -- # decimal 2 00:05:55.308 19:55:28 version -- scripts/common.sh@353 -- # local d=2 00:05:55.308 19:55:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.308 19:55:29 version -- scripts/common.sh@355 -- # echo 2 00:05:55.308 19:55:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.308 19:55:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.308 19:55:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.308 19:55:29 version -- scripts/common.sh@368 -- # return 0 00:05:55.308 19:55:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.308 19:55:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.308 --rc genhtml_branch_coverage=1 00:05:55.308 --rc genhtml_function_coverage=1 00:05:55.308 --rc genhtml_legend=1 00:05:55.308 --rc geninfo_all_blocks=1 00:05:55.308 --rc geninfo_unexecuted_blocks=1 00:05:55.308 00:05:55.308 ' 00:05:55.308 19:55:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.308 --rc genhtml_branch_coverage=1 00:05:55.308 --rc genhtml_function_coverage=1 00:05:55.308 --rc genhtml_legend=1 00:05:55.308 --rc geninfo_all_blocks=1 00:05:55.308 --rc geninfo_unexecuted_blocks=1 00:05:55.308 00:05:55.308 ' 00:05:55.308 19:55:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.308 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:55.308 --rc genhtml_branch_coverage=1 00:05:55.308 --rc genhtml_function_coverage=1 00:05:55.309 --rc genhtml_legend=1 00:05:55.309 --rc geninfo_all_blocks=1 00:05:55.309 --rc geninfo_unexecuted_blocks=1 00:05:55.309 00:05:55.309 ' 00:05:55.309 19:55:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.309 --rc genhtml_branch_coverage=1 00:05:55.309 --rc genhtml_function_coverage=1 00:05:55.309 --rc genhtml_legend=1 00:05:55.309 --rc geninfo_all_blocks=1 00:05:55.309 --rc geninfo_unexecuted_blocks=1 00:05:55.309 00:05:55.309 ' 00:05:55.309 19:55:29 version -- app/version.sh@17 -- # get_header_version major 00:05:55.309 19:55:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # cut -f2 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.309 19:55:29 version -- app/version.sh@17 -- # major=25 00:05:55.309 19:55:29 version -- app/version.sh@18 -- # get_header_version minor 00:05:55.309 19:55:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # cut -f2 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.309 19:55:29 version -- app/version.sh@18 -- # minor=1 00:05:55.309 19:55:29 version -- app/version.sh@19 -- # get_header_version patch 00:05:55.309 19:55:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # cut -f2 00:05:55.309 19:55:29 version -- app/version.sh@19 -- # patch=0 00:05:55.309 19:55:29 version -- app/version.sh@20 -- # get_header_version suffix 00:05:55.309 19:55:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # cut -f2 00:05:55.309 19:55:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.309 19:55:29 version -- app/version.sh@20 -- # suffix=-pre 00:05:55.309 19:55:29 version -- app/version.sh@22 -- # version=25.1 00:05:55.309 19:55:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:55.309 19:55:29 version -- app/version.sh@28 -- # version=25.1rc0 00:05:55.309 19:55:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:55.309 19:55:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:55.309 19:55:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:55.309 19:55:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:55.309 ************************************ 00:05:55.309 END TEST version 00:05:55.309 ************************************ 00:05:55.309 00:05:55.309 real 0m0.210s 00:05:55.309 user 0m0.129s 00:05:55.309 sys 0m0.107s 00:05:55.309 19:55:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.309 19:55:29 version -- common/autotest_common.sh@10 -- # set +x 00:05:55.567 19:55:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:55.567 19:55:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:55.567 19:55:29 -- spdk/autotest.sh@194 -- # uname -s 00:05:55.567 19:55:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:55.567 19:55:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.567 19:55:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.567 19:55:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:55.567 19:55:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:55.567 19:55:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.567 19:55:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.567 19:55:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.567 ************************************ 00:05:55.567 START TEST blockdev_nvme 00:05:55.567 ************************************ 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:55.567 * Looking for test storage... 00:05:55.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.567 19:55:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.567 --rc genhtml_branch_coverage=1 00:05:55.567 --rc genhtml_function_coverage=1 00:05:55.567 --rc genhtml_legend=1 00:05:55.567 --rc geninfo_all_blocks=1 00:05:55.567 --rc geninfo_unexecuted_blocks=1 00:05:55.567 00:05:55.567 ' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.567 --rc genhtml_branch_coverage=1 00:05:55.567 --rc genhtml_function_coverage=1 00:05:55.567 --rc genhtml_legend=1 00:05:55.567 --rc geninfo_all_blocks=1 00:05:55.567 --rc geninfo_unexecuted_blocks=1 00:05:55.567 00:05:55.567 ' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.567 --rc genhtml_branch_coverage=1 00:05:55.567 --rc genhtml_function_coverage=1 00:05:55.567 --rc genhtml_legend=1 00:05:55.567 --rc geninfo_all_blocks=1 00:05:55.567 --rc geninfo_unexecuted_blocks=1 00:05:55.567 00:05:55.567 ' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.567 --rc genhtml_branch_coverage=1 00:05:55.567 --rc genhtml_function_coverage=1 00:05:55.567 --rc genhtml_legend=1 00:05:55.567 --rc geninfo_all_blocks=1 00:05:55.567 --rc geninfo_unexecuted_blocks=1 00:05:55.567 00:05:55.567 ' 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:55.567 19:55:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59725 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59725 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59725 ']' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.567 19:55:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.567 19:55:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:55.567 [2024-11-19 19:55:29.356648] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:55.567 [2024-11-19 19:55:29.356890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:05:55.827 [2024-11-19 19:55:29.509127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.087 [2024-11-19 19:55:29.657490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.679 19:55:30 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.679 19:55:30 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:56.679 19:55:30 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:56.679 19:55:30 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:56.679 19:55:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:56.679 19:55:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:56.679 19:55:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:56.945 19:55:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:56.945 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.945 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 19:55:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.207 19:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "254777e8-5584-4b1e-9c95-b16e5139360f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "254777e8-5584-4b1e-9c95-b16e5139360f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "bca9c384-a96d-4b9e-828a-0ff5ffa26ac5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bca9c384-a96d-4b9e-828a-0ff5ffa26ac5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4389e83b-0660-4312-985d-7e18a7886b0e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4389e83b-0660-4312-985d-7e18a7886b0e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5624cca5-61b5-4c9b-a0ef-47b93ca0d269"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5624cca5-61b5-4c9b-a0ef-47b93ca0d269",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b59223b6-4880-4f63-a98e-b751a12e331d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b59223b6-4880-4f63-a98e-b751a12e331d",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b2729107-2299-4e18-a28f-61fba15a2eb6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b2729107-2299-4e18-a28f-61fba15a2eb6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:57.208 19:55:30 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59725 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59725 ']' 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59725 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:57.208 19:55:30 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59725 00:05:57.208 killing process with pid 59725 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59725' 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59725 00:05:57.208 19:55:30 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59725 00:05:59.116 19:55:32 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:59.116 19:55:32 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.116 19:55:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:59.116 19:55:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.116 19:55:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.116 ************************************ 00:05:59.116 START TEST bdev_hello_world 00:05:59.116 ************************************ 00:05:59.116 19:55:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.116 [2024-11-19 19:55:32.520569] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:59.116 [2024-11-19 19:55:32.520922] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:05:59.116 [2024-11-19 19:55:32.679057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.116 [2024-11-19 19:55:32.763908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.683 [2024-11-19 19:55:33.252431] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:59.683 [2024-11-19 19:55:33.252585] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:59.683 [2024-11-19 19:55:33.252604] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:59.683 [2024-11-19 19:55:33.254518] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:59.683 [2024-11-19 19:55:33.254840] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:59.683 [2024-11-19 19:55:33.254857] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:59.683 [2024-11-19 19:55:33.255119] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
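The hello_world run traced above is self-contained and can be reproduced outside the harness; a minimal sketch, assuming the SPDK tree is built at /home/vagrant/spdk_repo/spdk (as in this run) and the QEMU NVMe devices from bdev.json are bound to a userspace driver:

SPDK=/home/vagrant/spdk_repo/spdk
# same invocation run_test traces above: load the bdev config, open Nvme0n1,
# write "Hello World!", read it back, then stop the app
sudo "$SPDK/build/examples/hello_bdev" \
    --json "$SPDK/test/bdev/bdev.json" \
    -b Nvme0n1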
00:05:59.683 00:05:59.683 [2024-11-19 19:55:33.255140] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:00.252 00:06:00.252 real 0m1.356s 00:06:00.252 user 0m1.089s 00:06:00.252 sys 0m0.162s 00:06:00.252 19:55:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.252 19:55:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:00.252 ************************************ 00:06:00.252 END TEST bdev_hello_world 00:06:00.252 ************************************ 00:06:00.252 19:55:33 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:00.252 19:55:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:00.252 19:55:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.252 19:55:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.252 ************************************ 00:06:00.252 START TEST bdev_bounds 00:06:00.252 ************************************ 00:06:00.252 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:00.252 Process bdevio pid: 59840 00:06:00.252 19:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59840 00:06:00.252 19:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59840' 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59840 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59840 ']' 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.253 19:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:00.253 [2024-11-19 19:55:33.925210] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
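bdev_bounds splits into two halves: the bdevio app started above, and an RPC client that kicks off the registered CUnit suites once the app is listening on /var/tmp/spdk.sock. A rough sketch of the same flow, with flags exactly as logged:

SPDK=/home/vagrant/spdk_repo/spdk
# start the bdevio app with the same bdev config; it waits for an RPC request
# before running tests
sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
# once /var/tmp/spdk.sock is up (the waitforlisten step above), run every suite
sudo "$SPDK/test/bdev/bdevio/tests.py" perform_tests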
00:06:00.253 [2024-11-19 19:55:33.925316] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:06:00.513 [2024-11-19 19:55:34.079886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.513 [2024-11-19 19:55:34.190595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.513 [2024-11-19 19:55:34.191049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.513 [2024-11-19 19:55:34.191178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.083 19:55:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.083 19:55:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:01.083 19:55:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:01.342 I/O targets: 00:06:01.342 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:01.342 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:01.342 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.342 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.342 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.342 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:01.342 00:06:01.342 00:06:01.342 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.342 http://cunit.sourceforge.net/ 00:06:01.342 00:06:01.342 00:06:01.342 Suite: bdevio tests on: Nvme3n1 00:06:01.342 Test: blockdev write read block ...passed 00:06:01.342 Test: blockdev write zeroes read block ...passed 00:06:01.342 Test: blockdev write zeroes read no split ...passed 00:06:01.342 Test: blockdev write zeroes read split ...passed 00:06:01.342 Test: blockdev write zeroes read split partial ...passed 00:06:01.342 Test: blockdev reset ...[2024-11-19 19:55:34.930194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:01.342 passed 00:06:01.342 Test: blockdev write read 8 blocks ...[2024-11-19 19:55:34.935354] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:01.342 passed 00:06:01.342 Test: blockdev write read size > 128k ...passed 00:06:01.342 Test: blockdev write read invalid size ...passed 00:06:01.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.342 Test: blockdev write read max offset ...passed 00:06:01.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.342 Test: blockdev writev readv 8 blocks ...passed 00:06:01.342 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.342 Test: blockdev writev readv block ...passed 00:06:01.342 Test: blockdev writev readv size > 128k ...passed 00:06:01.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.342 Test: blockdev comparev and writev ...[2024-11-19 19:55:34.955720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd20a000 len:0x1000 00:06:01.342 [2024-11-19 19:55:34.955879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.342 passed 00:06:01.342 Test: blockdev nvme passthru rw ...passed 00:06:01.342 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:34.958293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.342 [2024-11-19 19:55:34.958409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.342 passed 00:06:01.342 Test: blockdev nvme admin passthru ...passed 00:06:01.342 Test: blockdev copy ...passed 00:06:01.342 Suite: bdevio tests on: Nvme2n3 00:06:01.342 Test: blockdev write read block ...passed 00:06:01.342 Test: blockdev write zeroes read block ...passed 00:06:01.342 Test: blockdev write zeroes read no split ...passed 00:06:01.342 Test: blockdev write zeroes read split ...passed 00:06:01.342 Test: blockdev write zeroes read split partial ...passed 00:06:01.342 Test: blockdev reset ...[2024-11-19 19:55:35.017953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:01.342 [2024-11-19 19:55:35.021242] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
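The comparev-and-writev cases above exercise NVMe COMPARE, which each of these bdevs advertises via supported_io_types in the bdev_get_bdevs dump earlier in the log. A quick check against a target with the same config loaded might look like:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/scripts/rpc.py" bdev_get_bdevs | \
    jq -r '.[] | "\(.name) compare=\(.supported_io_types.compare)"'
# every NVMe bdev in this run reports compare=true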
00:06:01.342 passed 00:06:01.342 Test: blockdev write read 8 blocks ...passed 00:06:01.342 Test: blockdev write read size > 128k ...passed 00:06:01.342 Test: blockdev write read invalid size ...passed 00:06:01.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.342 Test: blockdev write read max offset ...passed 00:06:01.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.342 Test: blockdev writev readv 8 blocks ...passed 00:06:01.342 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.342 Test: blockdev writev readv block ...passed 00:06:01.342 Test: blockdev writev readv size > 128k ...passed 00:06:01.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.342 Test: blockdev comparev and writev ...[2024-11-19 19:55:35.038637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a0406000 len:0x1000 00:06:01.342 [2024-11-19 19:55:35.038769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.342 passed 00:06:01.342 Test: blockdev nvme passthru rw ...passed 00:06:01.342 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:35.040732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.342 [2024-11-19 19:55:35.040841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.342 passed 00:06:01.342 Test: blockdev nvme admin passthru ...passed 00:06:01.342 Test: blockdev copy ...passed 00:06:01.342 Suite: bdevio tests on: Nvme2n2 00:06:01.342 Test: blockdev write read block ...passed 00:06:01.342 Test: blockdev write zeroes read block ...passed 00:06:01.342 Test: blockdev write zeroes read no split ...passed 00:06:01.342 Test: blockdev write zeroes read split ...passed 00:06:01.342 Test: blockdev write zeroes read split partial ...passed 00:06:01.342 Test: blockdev reset ...[2024-11-19 19:55:35.097502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:01.342 [2024-11-19 19:55:35.100540] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:01.342 passed 00:06:01.342 Test: blockdev write read 8 blocks ...passed 00:06:01.342 Test: blockdev write read size > 128k ...passed 00:06:01.342 Test: blockdev write read invalid size ...passed 00:06:01.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.342 Test: blockdev write read max offset ...passed 00:06:01.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.342 Test: blockdev writev readv 8 blocks ...passed 00:06:01.342 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.342 Test: blockdev writev readv block ...passed 00:06:01.342 Test: blockdev writev readv size > 128k ...passed 00:06:01.343 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.343 Test: blockdev comparev and writev ...[2024-11-19 19:55:35.108460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d143c000 len:0x1000 00:06:01.343 [2024-11-19 19:55:35.108579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.343 passed 00:06:01.343 Test: blockdev nvme passthru rw ...passed 00:06:01.343 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:35.109440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.343 [2024-11-19 19:55:35.109525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.343 passed 00:06:01.343 Test: blockdev nvme admin passthru ...passed 00:06:01.343 Test: blockdev copy ...passed 00:06:01.343 Suite: bdevio tests on: Nvme2n1 00:06:01.343 Test: blockdev write read block ...passed 00:06:01.343 Test: blockdev write zeroes read block ...passed 00:06:01.343 Test: blockdev write zeroes read no split ...passed 00:06:01.603 Test: blockdev write zeroes read split ...passed 00:06:01.603 Test: blockdev write zeroes read split partial ...passed 00:06:01.603 Test: blockdev reset ...[2024-11-19 19:55:35.178471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:01.603 [2024-11-19 19:55:35.182933] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:01.603 passed 00:06:01.603 Test: blockdev write read 8 blocks ...passed 00:06:01.603 Test: blockdev write read size > 128k ...passed 00:06:01.603 Test: blockdev write read invalid size ...passed 00:06:01.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.603 Test: blockdev write read max offset ...passed 00:06:01.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.603 Test: blockdev writev readv 8 blocks ...passed 00:06:01.603 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.603 Test: blockdev writev readv block ...passed 00:06:01.603 Test: blockdev writev readv size > 128k ...passed 00:06:01.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.603 Test: blockdev comparev and writev ...[2024-11-19 19:55:35.201254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1438000 len:0x1000 00:06:01.603 [2024-11-19 19:55:35.201387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.603 passed 00:06:01.603 Test: blockdev nvme passthru rw ...passed 00:06:01.603 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:35.203648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.603 [2024-11-19 19:55:35.203751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.603 passed 00:06:01.603 Test: blockdev nvme admin passthru ...passed 00:06:01.603 Test: blockdev copy ...passed 00:06:01.603 Suite: bdevio tests on: Nvme1n1 00:06:01.603 Test: blockdev write read block ...passed 00:06:01.603 Test: blockdev write zeroes read block ...passed 00:06:01.603 Test: blockdev write zeroes read no split ...passed 00:06:01.603 Test: blockdev write zeroes read split ...passed 00:06:01.603 Test: blockdev write zeroes read split partial ...passed 00:06:01.603 Test: blockdev reset ...[2024-11-19 19:55:35.261923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:01.603 [2024-11-19 19:55:35.266804] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:01.603 passed 00:06:01.603 Test: blockdev write read 8 blocks ...passed 00:06:01.603 Test: blockdev write read size > 128k ...passed 00:06:01.603 Test: blockdev write read invalid size ...passed 00:06:01.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.603 Test: blockdev write read max offset ...passed 00:06:01.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.603 Test: blockdev writev readv 8 blocks ...passed 00:06:01.603 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.603 Test: blockdev writev readv block ...passed 00:06:01.603 Test: blockdev writev readv size > 128k ...passed 00:06:01.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.603 Test: blockdev comparev and writev ...[2024-11-19 19:55:35.283641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1434000 len:0x1000 00:06:01.603 [2024-11-19 19:55:35.283887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.603 passed 00:06:01.603 Test: blockdev nvme passthru rw ...passed 00:06:01.603 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:35.284793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.603 [2024-11-19 19:55:35.284839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.603 passed 00:06:01.603 Test: blockdev nvme admin passthru ...passed 00:06:01.603 Test: blockdev copy ...passed 00:06:01.603 Suite: bdevio tests on: Nvme0n1 00:06:01.603 Test: blockdev write read block ...passed 00:06:01.603 Test: blockdev write zeroes read block ...passed 00:06:01.603 Test: blockdev write zeroes read no split ...passed 00:06:01.603 Test: blockdev write zeroes read split ...passed 00:06:01.603 Test: blockdev write zeroes read split partial ...passed 00:06:01.603 Test: blockdev reset ...[2024-11-19 19:55:35.345213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:01.603 [2024-11-19 19:55:35.349691] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
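In the Nvme0n1 suite that follows, bdevio skips comparev_and_writev: Nvme0n1 is the only bdev in this run created with separate (non-interleaved) metadata, visible as md_size 64 / md_interleave false in its bdev_get_bdevs entry above. One way to confirm that from a running target:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme0n1 | \
    jq '.[0] | {name, md_size, md_interleave}'
# -> { "name": "Nvme0n1", "md_size": 64, "md_interleave": false }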
00:06:01.603 passed 00:06:01.603 Test: blockdev write read 8 blocks ...passed 00:06:01.603 Test: blockdev write read size > 128k ...passed 00:06:01.603 Test: blockdev write read invalid size ...passed 00:06:01.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.603 Test: blockdev write read max offset ...passed 00:06:01.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.603 Test: blockdev writev readv 8 blocks ...passed 00:06:01.603 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.603 Test: blockdev writev readv block ...passed 00:06:01.603 Test: blockdev writev readv size > 128k ...passed 00:06:01.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.603 Test: blockdev comparev and writev ...passed 00:06:01.603 Test: blockdev nvme passthru rw ...[2024-11-19 19:55:35.362101] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:01.603 separate metadata which is not supported yet. 00:06:01.603 passed 00:06:01.604 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:55:35.362918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:01.604 [2024-11-19 19:55:35.362965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:01.604 passed 00:06:01.604 Test: blockdev nvme admin passthru ...passed 00:06:01.604 Test: blockdev copy ...passed 00:06:01.604 00:06:01.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.604 suites 6 6 n/a 0 0 00:06:01.604 tests 138 138 138 0 0 00:06:01.604 asserts 893 893 893 0 n/a 00:06:01.604 00:06:01.604 Elapsed time = 1.247 seconds 00:06:01.604 0 00:06:01.604 19:55:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59840 00:06:01.604 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59840 ']' 00:06:01.604 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59840 00:06:01.604 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59840 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.865 killing process with pid 59840 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59840' 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59840 00:06:01.865 19:55:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59840 00:06:02.433 19:55:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:02.433 00:06:02.433 real 0m2.280s 00:06:02.433 user 0m5.749s 00:06:02.433 sys 0m0.330s 00:06:02.433 19:55:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.433 ************************************ 00:06:02.433 END TEST bdev_bounds 00:06:02.433 ************************************ 00:06:02.433 19:55:36 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:02.433 19:55:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:02.433 19:55:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:02.433 19:55:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.433 19:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.433 ************************************ 00:06:02.433 START TEST bdev_nbd 00:06:02.433 ************************************ 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59900 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59900 /var/tmp/spdk-nbd.sock 00:06:02.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
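The nbd test pairs a minimal SPDK app (bdev_svc) with the kernel nbd driver: bdev_svc loads the same bdev.json and serves RPC on its own socket, and each bdev is then exported as a /dev/nbdN block device. A minimal sketch of that flow, assuming the nbd kernel module is loaded (the /sys/module/nbd check above guards exactly this):

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
    --json "$SPDK/test/bdev/bdev.json" &
# once the socket is up, export a bdev as a kernel block device
sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0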
00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59900 ']' 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.433 19:55:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:02.695 [2024-11-19 19:55:36.295097] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:02.695 [2024-11-19 19:55:36.295251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.695 [2024-11-19 19:55:36.454326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.957 [2024-11-19 19:55:36.591479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:03.527 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:03.789 1+0 records in 00:06:03.789 1+0 records out 00:06:03.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117217 s, 3.5 MB/s 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:03.789 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.048 1+0 records in 00:06:04.048 1+0 records out 00:06:04.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078167 s, 5.2 MB/s 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
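The waitfornbd checks traced above and below reduce to a small readiness probe; a condensed sketch of the helper's logic as it appears in the trace (the retry sleep and the scratch-file path are assumptions; the trace only shows the loop bounds and the repo-local nbdtest path):

waitfornbd() {
    local nbd_name=$1 i size
    # wait (up to 20 tries) for the kernel to publish the partition entry
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed retry cadence
    done
    # a single O_DIRECT read proves the SPDK backend is actually serving I/O
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}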
00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.048 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.306 1+0 records in 00:06:04.306 1+0 records out 00:06:04.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109264 s, 3.7 MB/s 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.306 19:55:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.568 19:55:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.568 1+0 records in 00:06:04.568 1+0 records out 00:06:04.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745358 s, 5.5 MB/s 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.568 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.830 1+0 records in 00:06:04.830 1+0 records out 00:06:04.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127047 s, 3.2 MB/s 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.830 19:55:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.830 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.090 1+0 records in 00:06:05.090 1+0 records out 00:06:05.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145351 s, 2.8 MB/s 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.090 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd0", 00:06:05.351 "bdev_name": "Nvme0n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd1", 00:06:05.351 "bdev_name": "Nvme1n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd2", 00:06:05.351 "bdev_name": "Nvme2n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd3", 00:06:05.351 "bdev_name": "Nvme2n2" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd4", 00:06:05.351 "bdev_name": "Nvme2n3" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd5", 00:06:05.351 "bdev_name": "Nvme3n1" 00:06:05.351 } 00:06:05.351 ]' 00:06:05.351 19:55:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd0", 00:06:05.351 "bdev_name": "Nvme0n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd1", 00:06:05.351 "bdev_name": "Nvme1n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd2", 00:06:05.351 "bdev_name": "Nvme2n1" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd3", 00:06:05.351 "bdev_name": "Nvme2n2" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd4", 00:06:05.351 "bdev_name": "Nvme2n3" 00:06:05.351 }, 00:06:05.351 { 00:06:05.351 "nbd_device": "/dev/nbd5", 00:06:05.351 "bdev_name": "Nvme3n1" 00:06:05.351 } 00:06:05.351 ]' 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.351 19:55:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.614 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.876 19:55:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.876 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.139 19:55:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.401 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:06.663 
19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.663 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.924 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 
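
Each nbd_stop_disk above is paired with waitfornbd_exit, which polls /proc/partitions up to 20 times until the device name disappears; once all six devices are gone, nbd_get_count must report zero before the test proceeds. A minimal sketch of that counting idiom, reconstructed from the nbd_common.sh trace (helper and variable names such as $rootdir are assumptions, not verbatim upstream code):

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name count
        disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches; an '|| true' guard like this
        # is why a bare 'true' appears in the trace when the disk list is empty
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }
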
00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.925 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:07.186 /dev/nbd0 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.186 1+0 records in 00:06:07.186 1+0 records out 00:06:07.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977073 s, 4.2 MB/s 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.186 19:55:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:07.447 /dev/nbd1 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.447 
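
The waitfornbd helper traced above for nbd0 (and repeated below for each remaining device) gates every nbd_start_disk: it polls /proc/partitions until the kernel registers the device node, then issues a single 4 KiB O_DIRECT read to prove the device services real I/O. A condensed sketch of the autotest_common.sh logic shown in the trace (the poll interval is assumed, and the real helper also retries the read in a second loop):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device node registered?
            sleep 0.1                                          # poll interval (assumed)
        done
        # one O_DIRECT read: the device must complete actual I/O, not merely exist
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                       # the trace checks '[' 4096 '!=' 0 ']'
    }
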
19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.447 1+0 records in 00:06:07.447 1+0 records out 00:06:07.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105879 s, 3.9 MB/s 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.447 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:07.709 /dev/nbd10 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.709 1+0 records in 00:06:07.709 1+0 records out 00:06:07.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012355 s, 3.3 MB/s 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.709 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:07.971 /dev/nbd11 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.971 1+0 records in 00:06:07.971 1+0 records out 00:06:07.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133851 s, 3.1 MB/s 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.971 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:08.232 /dev/nbd12 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.232 1+0 records in 00:06:08.232 1+0 records 
out 00:06:08.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105162 s, 3.9 MB/s 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.232 19:55:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:08.495 /dev/nbd13 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.495 1+0 records in 00:06:08.495 1+0 records out 00:06:08.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124288 s, 3.3 MB/s 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.495 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.756 { 00:06:08.756 
"nbd_device": "/dev/nbd0", 00:06:08.756 "bdev_name": "Nvme0n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd1", 00:06:08.756 "bdev_name": "Nvme1n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd10", 00:06:08.756 "bdev_name": "Nvme2n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd11", 00:06:08.756 "bdev_name": "Nvme2n2" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd12", 00:06:08.756 "bdev_name": "Nvme2n3" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd13", 00:06:08.756 "bdev_name": "Nvme3n1" 00:06:08.756 } 00:06:08.756 ]' 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd0", 00:06:08.756 "bdev_name": "Nvme0n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd1", 00:06:08.756 "bdev_name": "Nvme1n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd10", 00:06:08.756 "bdev_name": "Nvme2n1" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd11", 00:06:08.756 "bdev_name": "Nvme2n2" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd12", 00:06:08.756 "bdev_name": "Nvme2n3" 00:06:08.756 }, 00:06:08.756 { 00:06:08.756 "nbd_device": "/dev/nbd13", 00:06:08.756 "bdev_name": "Nvme3n1" 00:06:08.756 } 00:06:08.756 ]' 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.756 /dev/nbd1 00:06:08.756 /dev/nbd10 00:06:08.756 /dev/nbd11 00:06:08.756 /dev/nbd12 00:06:08.756 /dev/nbd13' 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.756 /dev/nbd1 00:06:08.756 /dev/nbd10 00:06:08.756 /dev/nbd11 00:06:08.756 /dev/nbd12 00:06:08.756 /dev/nbd13' 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.756 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:08.757 256+0 records in 00:06:08.757 256+0 records out 00:06:08.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103401 s, 101 MB/s 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.757 19:55:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.018 256+0 records in 00:06:09.018 256+0 records out 00:06:09.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238343 s, 4.4 MB/s 00:06:09.018 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.018 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.280 256+0 records in 00:06:09.280 256+0 records out 00:06:09.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216197 s, 4.9 MB/s 00:06:09.280 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.280 19:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:09.280 256+0 records in 00:06:09.280 256+0 records out 00:06:09.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191563 s, 5.5 MB/s 00:06:09.280 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.280 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:09.542 256+0 records in 00:06:09.543 256+0 records out 00:06:09.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240669 s, 4.4 MB/s 00:06:09.543 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.543 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:09.804 256+0 records in 00:06:09.804 256+0 records out 00:06:09.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232931 s, 4.5 MB/s 00:06:09.804 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.804 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:10.065 256+0 records in 00:06:10.065 256+0 records out 00:06:10.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231476 s, 4.5 MB/s 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 
19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.065 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.327 19:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.327 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.327 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.327 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.327 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
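
The write/verify round trip just completed (nbd_common.sh@100 and @101) is the core of the data-path check: 1 MiB of random data is written to every exported device with O_DIRECT, then read back and byte-compared. A condensed sketch of what the trace shows, with $rootdir standing in for the spdk repo path:

    tmp=$rootdir/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it to each device, O_DIRECT
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-compare the first MiB back
    done
    rm "$tmp"
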
00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.600 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.893 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.153 19:55:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.414 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.675 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.676 19:55:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:11.676 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.676 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:11.676 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:11.937 malloc_lvol_verify 00:06:11.937 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:11.937 f90c3065-0e5a-4326-ac80-9ee6e6e7836e 00:06:11.937 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:12.198 5268b154-d545-4611-9bde-9473e5c9706e 00:06:12.198 19:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:12.460 /dev/nbd0 00:06:12.460 
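
nbd_with_lvol_verify stacks a logical volume on a malloc bdev and exports it over nbd; the pass criterion is that mkfs.ext4 (below) completes cleanly on the resulting device. The RPC sequence, as traced, condensed here with $rootdir standing in for the repo path:

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512       # 16 MB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                        # 4 MB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
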
19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:12.460 mke2fs 1.47.0 (5-Feb-2023) 00:06:12.460 Discarding device blocks: 0/4096 done 00:06:12.460 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:12.460 00:06:12.460 Allocating group tables: 0/1 done 00:06:12.460 Writing inode tables: 0/1 done 00:06:12.460 Creating journal (1024 blocks): done 00:06:12.460 Writing superblocks and filesystem accounting information: 0/1 done 00:06:12.460 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.460 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59900 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59900 ']' 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59900 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59900 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.721 killing process with pid 59900 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59900' 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59900 00:06:12.721 19:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 
-- # wait 59900 00:06:13.663 ************************************ 00:06:13.663 END TEST bdev_nbd 00:06:13.663 ************************************ 00:06:13.663 19:55:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:13.663 00:06:13.663 real 0m11.218s 00:06:13.663 user 0m15.171s 00:06:13.663 sys 0m3.749s 00:06:13.664 19:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.664 19:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 skipping fio tests on NVMe due to multi-ns failures. 00:06:13.925 19:55:47 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:13.925 19:55:47 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:13.925 19:55:47 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:13.925 19:55:47 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.925 19:55:47 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:13.925 19:55:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:13.925 19:55:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.925 19:55:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 ************************************ 00:06:13.925 START TEST bdev_verify 00:06:13.925 ************************************ 00:06:13.925 19:55:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:13.925 [2024-11-19 19:55:47.578696] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:13.925 [2024-11-19 19:55:47.578849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:06:14.186 [2024-11-19 19:55:47.748345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.186 [2024-11-19 19:55:47.873480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.186 [2024-11-19 19:55:47.873576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.759 Running I/O for 5 seconds... 
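
In the results that follow, every namespace appears twice because bdevperf was started with -m 0x3 (reactors on cores 0 and 1) and -C, which lets each core submit its own verify job to every bdev. The MiB/s column follows from IOPS and the 4096-byte I/O size on the command line; a quick cross-check of the first Nvme0n1 row:

    awk 'BEGIN { printf "%.2f\n", 1566.80 * 4096 / 1048576 }'   # -> 6.12 MiB/s, matching the table
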
00:06:17.090 17792.00 IOPS, 69.50 MiB/s [2024-11-19T19:55:51.829Z] 19104.00 IOPS, 74.62 MiB/s [2024-11-19T19:55:52.773Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-19T19:55:53.717Z] 18912.00 IOPS, 73.88 MiB/s [2024-11-19T19:55:53.717Z] 19046.40 IOPS, 74.40 MiB/s 00:06:19.923 Latency(us) 00:06:19.923 [2024-11-19T19:55:53.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.923 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0xbd0bd 00:06:19.923 Nvme0n1 : 5.07 1566.80 6.12 0.00 0.00 81487.23 18955.03 89128.96 00:06:19.923 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:19.923 Nvme0n1 : 5.08 1585.97 6.20 0.00 0.00 80471.35 18350.08 87515.77 00:06:19.923 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0xa0000 00:06:19.923 Nvme1n1 : 5.07 1566.34 6.12 0.00 0.00 81336.75 18652.55 78643.20 00:06:19.923 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0xa0000 length 0xa0000 00:06:19.923 Nvme1n1 : 5.09 1585.56 6.19 0.00 0.00 80383.19 18955.03 80659.69 00:06:19.923 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0x80000 00:06:19.923 Nvme2n1 : 5.07 1565.41 6.11 0.00 0.00 81181.75 19963.27 70577.23 00:06:19.923 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x80000 length 0x80000 00:06:19.923 Nvme2n1 : 5.09 1585.14 6.19 0.00 0.00 80037.87 20366.57 64527.75 00:06:19.923 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0x80000 00:06:19.923 Nvme2n2 : 5.07 1565.00 6.11 0.00 0.00 81032.79 20265.75 64931.05 00:06:19.923 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x80000 length 0x80000 00:06:19.923 Nvme2n2 : 5.09 1584.23 6.19 0.00 0.00 79854.29 20971.52 64527.75 00:06:19.923 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0x80000 00:06:19.923 Nvme2n3 : 5.07 1564.52 6.11 0.00 0.00 80931.78 20265.75 66140.95 00:06:19.923 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x80000 length 0x80000 00:06:19.923 Nvme2n3 : 5.09 1583.82 6.19 0.00 0.00 79743.63 18551.73 65334.35 00:06:19.923 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x0 length 0x20000 00:06:19.923 Nvme3n1 : 5.07 1564.12 6.11 0.00 0.00 80819.17 17442.66 66140.95 00:06:19.923 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:19.923 Verification LBA range: start 0x20000 length 0x20000 00:06:19.923 Nvme3n1 : 5.09 1583.41 6.19 0.00 0.00 79685.03 15627.82 65737.65 00:06:19.923 [2024-11-19T19:55:53.717Z] =================================================================================================================== 00:06:19.923 [2024-11-19T19:55:53.717Z] Total : 18900.32 73.83 0.00 0.00 80575.99 15627.82 89128.96 00:06:21.336 00:06:21.336 real 0m7.293s 00:06:21.336 user 0m13.511s 00:06:21.336 sys 0m0.282s 00:06:21.337 19:55:54 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.337 ************************************ 00:06:21.337 END TEST bdev_verify 00:06:21.337 ************************************ 00:06:21.337 19:55:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 19:55:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:21.337 19:55:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:21.337 19:55:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.337 19:55:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 ************************************ 00:06:21.337 START TEST bdev_verify_big_io 00:06:21.337 ************************************ 00:06:21.337 19:55:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:21.337 [2024-11-19 19:55:54.925768] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:21.337 [2024-11-19 19:55:54.925888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:06:21.337 [2024-11-19 19:55:55.088573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.598 [2024-11-19 19:55:55.189529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.598 [2024-11-19 19:55:55.189604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.170 Running I/O for 5 seconds... 
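
bdev_verify_big_io repeats the verify workload with -o 65536, so each completed I/O moves 64 KiB and the MiB/s column is simply IOPS / 16. Checking the first Nvme0n1 row of the table below:

    awk 'BEGIN { printf "%.2f\n", 94.83 * 65536 / 1048576 }'   # -> 5.93 MiB/s, as reported
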
00:06:27.379 608.00 IOPS, 38.00 MiB/s [2024-11-19T19:56:01.739Z] 2286.50 IOPS, 142.91 MiB/s [2024-11-19T19:56:01.996Z] 2195.00 IOPS, 137.19 MiB/s [2024-11-19T19:56:01.996Z] 2340.50 IOPS, 146.28 MiB/s 00:06:28.202 Latency(us) 00:06:28.202 [2024-11-19T19:56:01.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.202 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0x0 length 0xbd0b 00:06:28.202 Nvme0n1 : 5.59 94.83 5.93 0.00 0.00 1270647.83 25508.63 1626099.40 00:06:28.202 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:28.202 Nvme0n1 : 5.71 137.53 8.60 0.00 0.00 869303.60 25609.45 1122782.92 00:06:28.202 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0x0 length 0xa000 00:06:28.202 Nvme1n1 : 5.80 106.98 6.69 0.00 0.00 1122129.10 48799.11 1555118.87 00:06:28.202 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0xa000 length 0xa000 00:06:28.202 Nvme1n1 : 5.71 133.45 8.34 0.00 0.00 882754.17 59688.17 1361535.61 00:06:28.202 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0x0 length 0x8000 00:06:28.202 Nvme2n1 : 5.81 106.44 6.65 0.00 0.00 1087867.91 48799.11 1445421.69 00:06:28.202 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0x8000 length 0x8000 00:06:28.202 Nvme2n1 : 5.87 144.52 9.03 0.00 0.00 796148.73 60898.07 1393799.48 00:06:28.202 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.202 Verification LBA range: start 0x0 length 0x8000 00:06:28.203 Nvme2n2 : 5.81 110.19 6.89 0.00 0.00 1025539.47 93968.54 1380893.93 00:06:28.203 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.203 Verification LBA range: start 0x8000 length 0x8000 00:06:28.203 Nvme2n2 : 5.94 147.06 9.19 0.00 0.00 753307.71 65737.65 1419610.58 00:06:28.203 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.203 Verification LBA range: start 0x0 length 0x8000 00:06:28.203 Nvme2n3 : 5.87 119.90 7.49 0.00 0.00 917813.96 12048.54 1522854.99 00:06:28.203 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.203 Verification LBA range: start 0x8000 length 0x8000 00:06:28.203 Nvme2n3 : 5.98 156.23 9.76 0.00 0.00 689681.66 14720.39 1445421.69 00:06:28.203 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:28.203 Verification LBA range: start 0x0 length 0x2000 00:06:28.203 Nvme3n1 : 5.94 140.04 8.75 0.00 0.00 765177.92 598.65 1664816.05 00:06:28.203 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:28.203 Verification LBA range: start 0x2000 length 0x2000 00:06:28.203 Nvme3n1 : 6.03 201.33 12.58 0.00 0.00 523043.04 2356.78 819502.47 00:06:28.203 [2024-11-19T19:56:01.997Z] =================================================================================================================== 00:06:28.203 [2024-11-19T19:56:01.997Z] Total : 1598.51 99.91 0.00 0.00 849091.64 598.65 1664816.05 00:06:29.578 00:06:29.578 real 0m8.454s 00:06:29.578 user 0m15.975s 00:06:29.578 sys 0m0.232s 00:06:29.578 19:56:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:29.578 19:56:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:29.578 ************************************ 00:06:29.578 END TEST bdev_verify_big_io 00:06:29.578 ************************************ 00:06:29.578 19:56:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:29.579 19:56:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:29.579 19:56:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.579 19:56:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 START TEST bdev_write_zeroes 00:06:29.579 ************************************ 00:06:29.579 19:56:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:29.837 [2024-11-19 19:56:03.421381] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:29.837 [2024-11-19 19:56:03.421493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:06:29.837 [2024-11-19 19:56:03.582207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.135 [2024-11-19 19:56:03.679819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.709 Running I/O for 1 seconds... 
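
bdev_write_zeroes runs a single reactor for one second; in the table below, the Total row is the sum of the six per-bdev rows (the last digit differs only by rounding):

    awk 'BEGIN { print 11372.49+11359.45+11346.60+11333.66+11320.87+11308.13 }'   # -> 68041.2 IOPS
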
00:06:31.647 68352.00 IOPS, 267.00 MiB/s 00:06:31.647 Latency(us) 00:06:31.647 [2024-11-19T19:56:05.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.647 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme0n1 : 1.02 11372.49 44.42 0.00 0.00 11233.03 5520.15 24500.38 00:06:31.647 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme1n1 : 1.02 11359.45 44.37 0.00 0.00 11231.03 8872.57 20769.87 00:06:31.647 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme2n1 : 1.02 11346.60 44.32 0.00 0.00 11203.40 8620.50 21475.64 00:06:31.647 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme2n2 : 1.02 11333.66 44.27 0.00 0.00 11200.38 8721.33 21778.12 00:06:31.647 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme2n3 : 1.02 11320.87 44.22 0.00 0.00 11179.88 7057.72 20769.87 00:06:31.647 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:31.647 Nvme3n1 : 1.02 11308.13 44.17 0.00 0.00 11172.86 6452.78 21173.17 00:06:31.647 [2024-11-19T19:56:05.441Z] =================================================================================================================== 00:06:31.647 [2024-11-19T19:56:05.441Z] Total : 68041.19 265.79 0.00 0.00 11203.43 5520.15 24500.38 00:06:32.590 00:06:32.590 real 0m2.715s 00:06:32.590 user 0m2.419s 00:06:32.590 sys 0m0.179s 00:06:32.590 19:56:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.590 ************************************ 00:06:32.590 END TEST bdev_write_zeroes 00:06:32.590 19:56:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:32.590 ************************************ 00:06:32.590 19:56:06 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.590 19:56:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:32.590 19:56:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.590 19:56:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.590 ************************************ 00:06:32.590 START TEST bdev_json_nonenclosed 00:06:32.590 ************************************ 00:06:32.590 19:56:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.590 [2024-11-19 19:56:06.225308] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:32.590 [2024-11-19 19:56:06.225445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60549 ] 00:06:32.851 [2024-11-19 19:56:06.390988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.851 [2024-11-19 19:56:06.526146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.851 [2024-11-19 19:56:06.526298] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:32.851 [2024-11-19 19:56:06.526327] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:32.851 [2024-11-19 19:56:06.526342] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.112 00:06:33.112 real 0m0.579s 00:06:33.112 user 0m0.354s 00:06:33.112 sys 0m0.118s 00:06:33.112 19:56:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.112 19:56:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:33.112 ************************************ 00:06:33.112 END TEST bdev_json_nonenclosed 00:06:33.112 ************************************ 00:06:33.112 19:56:06 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.112 19:56:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:33.112 19:56:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.112 19:56:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.112 ************************************ 00:06:33.112 START TEST bdev_json_nonarray 00:06:33.112 ************************************ 00:06:33.112 19:56:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.112 [2024-11-19 19:56:06.859049] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:33.112 [2024-11-19 19:56:06.859197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60575 ] 00:06:33.373 [2024-11-19 19:56:07.020472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.373 [2024-11-19 19:56:07.153405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.373 [2024-11-19 19:56:07.153525] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
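
Both JSON negative tests follow the same shape: hand bdevperf a deliberately malformed --json config and require json_config_prepare_ctx to reject it, with the app_stop'd-on-non-zero warning as the expected outcome. Illustrative fixtures only (the repo's nonenclosed.json and nonarray.json may differ in detail, but inputs with these defects trigger the same two errors):

    printf '[]\n' > /tmp/nonenclosed.json                 # valid JSON, but top level is not an object
    printf '{"subsystems": {}}\n' > /tmp/nonarray.json    # 'subsystems' present but not an array
    bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1   # expect non-zero exit
    bdevperf --json /tmp/nonarray.json    -q 128 -o 4096 -w write_zeroes -t 1   # expect non-zero exit
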
00:06:33.373 [2024-11-19 19:56:07.153546] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.373 [2024-11-19 19:56:07.153556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.633 00:06:33.633 real 0m0.565s 00:06:33.633 user 0m0.347s 00:06:33.633 sys 0m0.112s 00:06:33.633 19:56:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.633 19:56:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:33.633 ************************************ 00:06:33.633 END TEST bdev_json_nonarray 00:06:33.633 ************************************ 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:33.633 19:56:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:33.633 00:06:33.633 real 0m38.275s 00:06:33.633 user 0m57.950s 00:06:33.633 sys 0m6.034s 00:06:33.633 19:56:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.633 19:56:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.633 ************************************ 00:06:33.633 END TEST blockdev_nvme 00:06:33.633 ************************************ 00:06:33.892 19:56:07 -- spdk/autotest.sh@209 -- # uname -s 00:06:33.892 19:56:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:33.892 19:56:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:33.892 19:56:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.892 19:56:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.892 19:56:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.892 ************************************ 00:06:33.892 START TEST blockdev_nvme_gpt 00:06:33.892 ************************************ 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:33.892 * Looking for test storage... 
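(Note: both negative tests above fail inside json_config_prepare_ctx exactly as intended. The log does not print the offending files, but the two error messages suggest shapes roughly like the following; these are hypothetical contents, not the repository's actual nonenclosed.json/nonarray.json:

    # "not enclosed in {}" -- the top level is not a JSON object
    cat > /tmp/nonenclosed_example.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # "'subsystems' should be an array" -- the value is an object rather than an array
    cat > /tmp/nonarray_example.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF
)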
00:06:33.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.892 19:56:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.892 --rc genhtml_branch_coverage=1 00:06:33.892 --rc genhtml_function_coverage=1 00:06:33.892 --rc genhtml_legend=1 00:06:33.892 --rc geninfo_all_blocks=1 00:06:33.892 --rc geninfo_unexecuted_blocks=1 00:06:33.892 00:06:33.892 ' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.892 --rc 
genhtml_branch_coverage=1 00:06:33.892 --rc genhtml_function_coverage=1 00:06:33.892 --rc genhtml_legend=1 00:06:33.892 --rc geninfo_all_blocks=1 00:06:33.892 --rc geninfo_unexecuted_blocks=1 00:06:33.892 00:06:33.892 ' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.892 --rc genhtml_branch_coverage=1 00:06:33.892 --rc genhtml_function_coverage=1 00:06:33.892 --rc genhtml_legend=1 00:06:33.892 --rc geninfo_all_blocks=1 00:06:33.892 --rc geninfo_unexecuted_blocks=1 00:06:33.892 00:06:33.892 ' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.892 --rc genhtml_branch_coverage=1 00:06:33.892 --rc genhtml_function_coverage=1 00:06:33.892 --rc genhtml_legend=1 00:06:33.892 --rc geninfo_all_blocks=1 00:06:33.892 --rc geninfo_unexecuted_blocks=1 00:06:33.892 00:06:33.892 ' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60653 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60653 
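(Note: spdk_tgt is launched here and waitforlisten, traced below, blocks until the target's RPC socket accepts requests before any rpc_cmd is issued. A minimal sketch of the same wait-for-socket idea, assuming the default /var/tmp/spdk.sock and not the helper's actual implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    # poll until the UNIX-domain RPC socket exists, bailing out if the target dies first
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
)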
00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60653 ']' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.892 19:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:33.892 19:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.892 [2024-11-19 19:56:07.675401] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:33.892 [2024-11-19 19:56:07.675516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:06:34.151 [2024-11-19 19:56:07.835436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.151 [2024-11-19 19:56:07.936896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.091 19:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.091 19:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:35.091 19:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:35.091 19:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:35.091 19:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.352 Waiting for block devices as requested 00:06:35.352 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.622 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.622 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.622 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.900 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:40.900 19:56:14 
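(Note: the get_zoned_devs loop above walks /sys/block/nvme* and would exclude any namespace whose queue/zoned attribute is not "none"; in this run every comparison expands to [[ none != none ]], so nothing is filtered out. The same check in isolation, a sketch based on the sysfs attribute used above:

    # list zoned NVMe block devices, if any
    for dev in /sys/block/nvme*; do
        [ -e "$dev/queue/zoned" ] || continue
        [ "$(cat "$dev/queue/zoned")" != "none" ] && echo "${dev##*/} is zoned"
    done
)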
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:40.900 BYT; 00:06:40.900 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:40.900 BYT; 00:06:40.900 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.900 19:56:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.900 19:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:41.833 The operation has completed successfully. 00:06:41.833 19:56:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:42.764 The operation has completed successfully. 00:06:42.764 19:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.587 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.845 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:43.845 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.845 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.845 [] 00:06:43.845 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.845 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:43.845 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.845 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:44.103 19:56:17 
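(Note: to summarize the GPT setup just traced, /dev/nvme0n1 was the namespace with an unrecognised disk label, so it received a fresh GPT label with two half-disk partitions, and sgdisk then stamped them with the SPDK partition type GUIDs read out of module/bdev/gpt/gpt.h plus the fixed unique GUIDs. Condensed into a standalone sequence using the same device and GUIDs as above:

    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
)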
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:44.103 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.103 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.362 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.362 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:44.362 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:44.363 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "843cbdc8-e4f1-491f-8c99-ed684ecc7586"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "843cbdc8-e4f1-491f-8c99-ed684ecc7586",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "51683fe6-c321-420f-9fcf-58d5e2568104"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "51683fe6-c321-420f-9fcf-58d5e2568104",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5517f99b-6b8c-4dd3-aae9-9f0efa055e3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5517f99b-6b8c-4dd3-aae9-9f0efa055e3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cb176077-eac4-4d51-841b-806f9ca17642"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb176077-eac4-4d51-841b-806f9ca17642",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a0665645-4644-4bf4-a197-e7562af34476"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a0665645-4644-4bf4-a197-e7562af34476",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:44.363 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:44.363 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:44.363 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:44.363 19:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60653 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60653 ']' 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60653 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60653 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.363 killing process with pid 60653 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60653' 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60653 00:06:44.363 19:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60653 00:06:45.751 19:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:45.751 19:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.751 19:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:45.751 19:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.751 19:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.751 ************************************ 00:06:45.751 START TEST bdev_hello_world 00:06:45.751 ************************************ 00:06:45.751 19:56:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.751 
[2024-11-19 19:56:19.205727] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:45.751 [2024-11-19 19:56:19.205838] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61278 ] 00:06:45.751 [2024-11-19 19:56:19.361161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.751 [2024-11-19 19:56:19.441972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.331 [2024-11-19 19:56:19.938072] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:46.331 [2024-11-19 19:56:19.938114] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:46.331 [2024-11-19 19:56:19.938136] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:46.331 [2024-11-19 19:56:19.940513] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:46.331 [2024-11-19 19:56:19.941518] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:46.331 [2024-11-19 19:56:19.941547] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:46.331 [2024-11-19 19:56:19.942342] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:46.331 00:06:46.331 [2024-11-19 19:56:19.942369] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:46.902 00:06:46.902 real 0m1.511s 00:06:46.902 user 0m1.236s 00:06:46.902 sys 0m0.169s 00:06:46.902 19:56:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.902 19:56:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:46.902 ************************************ 00:06:46.902 END TEST bdev_hello_world 00:06:46.902 ************************************ 00:06:47.162 19:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:47.162 19:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.162 19:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.162 19:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 ************************************ 00:06:47.162 START TEST bdev_bounds 00:06:47.162 ************************************ 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:47.162 Process bdevio pid: 61314 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61314 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61314' 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61314 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61314 ']' 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.162 19:56:20 
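(Note: bdev_bounds, launched above, exercises the same bdev list built earlier from the long rpc_cmd bdev_get_bdevs dump, which blockdev.sh filtered with jq down to unclaimed bdevs and then to names: Nvme0n1, the two GPT partitions Nvme1n1p1/Nvme1n1p2, Nvme2n1-n3 and Nvme3n1. The same filtering can be reproduced against a running target, assuming the standard rpc.py client and default RPC socket:

    # names of unclaimed bdevs, mirroring the jq filters used by blockdev.sh above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'
)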
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.162 19:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 [2024-11-19 19:56:20.783181] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:47.162 [2024-11-19 19:56:20.783312] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:06:47.162 [2024-11-19 19:56:20.943701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.423 [2024-11-19 19:56:21.049567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.423 [2024-11-19 19:56:21.049872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.423 [2024-11-19 19:56:21.049984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.997 19:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.997 19:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:47.997 19:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:47.997 I/O targets: 00:06:47.997 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:47.997 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:47.997 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:47.997 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:47.997 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:47.997 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:47.997 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:47.997 00:06:47.997 00:06:47.997 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.997 http://cunit.sourceforge.net/ 00:06:47.997 00:06:47.997 00:06:47.997 Suite: bdevio tests on: Nvme3n1 00:06:47.997 Test: blockdev write read block ...passed 00:06:47.997 Test: blockdev write zeroes read block ...passed 00:06:47.997 Test: blockdev write zeroes read no split ...passed 00:06:47.997 Test: blockdev write zeroes read split ...passed 00:06:47.997 Test: blockdev write zeroes read split partial ...passed 00:06:47.997 Test: blockdev reset ...[2024-11-19 19:56:21.770770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:47.997 [2024-11-19 19:56:21.773854] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:47.997 passed 00:06:47.997 Test: blockdev write read 8 blocks ...passed 00:06:47.997 Test: blockdev write read size > 128k ...passed 00:06:47.997 Test: blockdev write read invalid size ...passed 00:06:47.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:47.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:47.997 Test: blockdev write read max offset ...passed 00:06:47.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:47.997 Test: blockdev writev readv 8 blocks ...passed 00:06:47.997 Test: blockdev writev readv 30 x 1block ...passed 00:06:47.997 Test: blockdev writev readv block ...passed 00:06:47.997 Test: blockdev writev readv size > 128k ...passed 00:06:47.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:47.997 Test: blockdev comparev and writev ...[2024-11-19 19:56:21.781541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0804000 len:0x1000 00:06:47.997 [2024-11-19 19:56:21.781598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:47.997 passed 00:06:47.997 Test: blockdev nvme passthru rw ...passed 00:06:47.997 Test: blockdev nvme passthru vendor specific ...passed 00:06:47.997 Test: blockdev nvme admin passthru ...[2024-11-19 19:56:21.783141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:47.997 [2024-11-19 19:56:21.783194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.259 passed 00:06:48.259 Test: blockdev copy ...passed 00:06:48.259 Suite: bdevio tests on: Nvme2n3 00:06:48.259 Test: blockdev write read block ...passed 00:06:48.259 Test: blockdev write zeroes read block ...passed 00:06:48.260 Test: blockdev write zeroes read no split ...passed 00:06:48.260 Test: blockdev write zeroes read split ...passed 00:06:48.260 Test: blockdev write zeroes read split partial ...passed 00:06:48.260 Test: blockdev reset ...[2024-11-19 19:56:21.844176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.260 [2024-11-19 19:56:21.848654] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.260 passed 00:06:48.260 Test: blockdev write read 8 blocks ...passed 00:06:48.260 Test: blockdev write read size > 128k ...passed 00:06:48.260 Test: blockdev write read invalid size ...passed 00:06:48.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.260 Test: blockdev write read max offset ...passed 00:06:48.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.260 Test: blockdev writev readv 8 blocks ...passed 00:06:48.260 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.260 Test: blockdev writev readv block ...passed 00:06:48.260 Test: blockdev writev readv size > 128k ...passed 00:06:48.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.260 Test: blockdev comparev and writev ...[2024-11-19 19:56:21.868236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0802000 len:0x1000 00:06:48.260 [2024-11-19 19:56:21.868302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev nvme passthru rw ...passed 00:06:48.260 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:56:21.870817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.260 [2024-11-19 19:56:21.870868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev nvme admin passthru ...passed 00:06:48.260 Test: blockdev copy ...passed 00:06:48.260 Suite: bdevio tests on: Nvme2n2 00:06:48.260 Test: blockdev write read block ...passed 00:06:48.260 Test: blockdev write zeroes read block ...passed 00:06:48.260 Test: blockdev write zeroes read no split ...passed 00:06:48.260 Test: blockdev write zeroes read split ...passed 00:06:48.260 Test: blockdev write zeroes read split partial ...passed 00:06:48.260 Test: blockdev reset ...[2024-11-19 19:56:21.929521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.260 passed 00:06:48.260 Test: blockdev write read 8 blocks ...[2024-11-19 19:56:21.932844] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.260 passed 00:06:48.260 Test: blockdev write read size > 128k ...passed 00:06:48.260 Test: blockdev write read invalid size ...passed 00:06:48.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.260 Test: blockdev write read max offset ...passed 00:06:48.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.260 Test: blockdev writev readv 8 blocks ...passed 00:06:48.260 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.260 Test: blockdev writev readv block ...passed 00:06:48.260 Test: blockdev writev readv size > 128k ...passed 00:06:48.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.260 Test: blockdev comparev and writev ...[2024-11-19 19:56:21.944843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0638000 len:0x1000 00:06:48.260 [2024-11-19 19:56:21.944893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev nvme passthru rw ...passed 00:06:48.260 Test: blockdev nvme passthru vendor specific ...[2024-11-19 19:56:21.947359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.260 passed 00:06:48.260 Test: blockdev nvme admin passthru ...[2024-11-19 19:56:21.947394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev copy ...passed 00:06:48.260 Suite: bdevio tests on: Nvme2n1 00:06:48.260 Test: blockdev write read block ...passed 00:06:48.260 Test: blockdev write zeroes read block ...passed 00:06:48.260 Test: blockdev write zeroes read no split ...passed 00:06:48.260 Test: blockdev write zeroes read split ...passed 00:06:48.260 Test: blockdev write zeroes read split partial ...passed 00:06:48.260 Test: blockdev reset ...[2024-11-19 19:56:22.011484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.260 [2024-11-19 19:56:22.016926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.260 passed 00:06:48.260 Test: blockdev write read 8 blocks ...passed 00:06:48.260 Test: blockdev write read size > 128k ...passed 00:06:48.260 Test: blockdev write read invalid size ...passed 00:06:48.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.260 Test: blockdev write read max offset ...passed 00:06:48.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.260 Test: blockdev writev readv 8 blocks ...passed 00:06:48.260 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.260 Test: blockdev writev readv block ...passed 00:06:48.260 Test: blockdev writev readv size > 128k ...passed 00:06:48.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.260 Test: blockdev comparev and writev ...[2024-11-19 19:56:22.037351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0634000 len:0x1000 00:06:48.260 [2024-11-19 19:56:22.037396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev nvme passthru rw ...passed 00:06:48.260 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.260 Test: blockdev nvme admin passthru ...[2024-11-19 19:56:22.039677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.260 [2024-11-19 19:56:22.039713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.260 passed 00:06:48.260 Test: blockdev copy ...passed 00:06:48.260 Suite: bdevio tests on: Nvme1n1p2 00:06:48.260 Test: blockdev write read block ...passed 00:06:48.260 Test: blockdev write zeroes read block ...passed 00:06:48.520 Test: blockdev write zeroes read no split ...passed 00:06:48.520 Test: blockdev write zeroes read split ...passed 00:06:48.520 Test: blockdev write zeroes read split partial ...passed 00:06:48.520 Test: blockdev reset ...[2024-11-19 19:56:22.109016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:48.520 passed 00:06:48.520 Test: blockdev write read 8 blocks ...[2024-11-19 19:56:22.114844] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:48.520 passed 00:06:48.520 Test: blockdev write read size > 128k ...passed 00:06:48.520 Test: blockdev write read invalid size ...passed 00:06:48.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.520 Test: blockdev write read max offset ...passed 00:06:48.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.520 Test: blockdev writev readv 8 blocks ...passed 00:06:48.520 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.520 Test: blockdev writev readv block ...passed 00:06:48.520 Test: blockdev writev readv size > 128k ...passed 00:06:48.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.520 Test: blockdev comparev and writev ...[2024-11-19 19:56:22.130015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e0630000 len:0x1000 00:06:48.520 [2024-11-19 19:56:22.130074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:06:48.520 Test: blockdev nvme passthru rw ...passed 00:06:48.520 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.521 Test: blockdev nvme admin passthru ...passed 00:06:48.521 Test: blockdev copy ...0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.521 passed 00:06:48.521 Suite: bdevio tests on: Nvme1n1p1 00:06:48.521 Test: blockdev write read block ...passed 00:06:48.521 Test: blockdev write zeroes read block ...passed 00:06:48.521 Test: blockdev write zeroes read no split ...passed 00:06:48.521 Test: blockdev write zeroes read split ...passed 00:06:48.521 Test: blockdev write zeroes read split partial ...passed 00:06:48.521 Test: blockdev reset ...[2024-11-19 19:56:22.176506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:48.521 [2024-11-19 19:56:22.179112] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:48.521 passed 00:06:48.521 Test: blockdev write read 8 blocks ...passed 00:06:48.521 Test: blockdev write read size > 128k ...passed 00:06:48.521 Test: blockdev write read invalid size ...passed 00:06:48.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.521 Test: blockdev write read max offset ...passed 00:06:48.521 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.521 Test: blockdev writev readv 8 blocks ...passed 00:06:48.521 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.521 Test: blockdev writev readv block ...passed 00:06:48.521 Test: blockdev writev readv size > 128k ...passed 00:06:48.521 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.521 Test: blockdev comparev and writev ...[2024-11-19 19:56:22.187984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c120e000 len:0x1000 00:06:48.521 [2024-11-19 19:56:22.188033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.521 passed 00:06:48.521 Test: blockdev nvme passthru rw ...passed 00:06:48.521 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.521 Test: blockdev nvme admin passthru ...passed 00:06:48.521 Test: blockdev copy ...passed 00:06:48.521 Suite: bdevio tests on: Nvme0n1 00:06:48.521 Test: blockdev write read block ...passed 00:06:48.521 Test: blockdev write zeroes read block ...passed 00:06:48.521 Test: blockdev write zeroes read no split ...passed 00:06:48.521 Test: blockdev write zeroes read split ...passed 00:06:48.521 Test: blockdev write zeroes read split partial ...passed 00:06:48.521 Test: blockdev reset ...[2024-11-19 19:56:22.232408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:48.521 passed 00:06:48.521 Test: blockdev write read 8 blocks ...[2024-11-19 19:56:22.234945] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:48.521 passed 00:06:48.521 Test: blockdev write read size > 128k ...passed 00:06:48.521 Test: blockdev write read invalid size ...passed 00:06:48.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.521 Test: blockdev write read max offset ...passed 00:06:48.521 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.521 Test: blockdev writev readv 8 blocks ...passed 00:06:48.521 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.521 Test: blockdev writev readv block ...passed 00:06:48.521 Test: blockdev writev readv size > 128k ...passed 00:06:48.521 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.521 Test: blockdev comparev and writev ...[2024-11-19 19:56:22.240020] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:48.521 separate metadata which is not supported yet. 
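[Editor's reference note] The skip above is format-driven: Nvme0n1 carries separate (non-interleaved) metadata, which bdevio's comparev_and_writev case does not support yet. One way to see which bdevs expose metadata is to query the bdev layer; this is only a sketch, and the md_size/md_interleave field names and jq filter are assumptions based on typical bdev_get_bdevs output, not taken from this log.
# Hypothetical check: list bdevs that report metadata bytes per block (field names assumed).
sudo scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.md_size != null and .md_size > 0)
           | "\(.name): md_size=\(.md_size) interleaved=\(.md_interleave)"'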
00:06:48.521 passed 00:06:48.521 Test: blockdev nvme passthru rw ...passed 00:06:48.521 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.521 Test: blockdev nvme admin passthru ...[2024-11-19 19:56:22.240421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:48.521 [2024-11-19 19:56:22.240466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:48.521 passed 00:06:48.521 Test: blockdev copy ...passed 00:06:48.521 00:06:48.521 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.521 suites 7 7 n/a 0 0 00:06:48.521 tests 161 161 161 0 0 00:06:48.521 asserts 1025 1025 1025 0 n/a 00:06:48.521 00:06:48.521 Elapsed time = 1.363 seconds 00:06:48.521 0 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61314 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61314 ']' 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61314 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61314 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.521 killing process with pid 61314 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61314' 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61314 00:06:48.521 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61314 00:06:49.089 19:56:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:49.089 00:06:49.089 real 0m2.099s 00:06:49.089 user 0m5.302s 00:06:49.089 sys 0m0.299s 00:06:49.089 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.089 19:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:49.089 ************************************ 00:06:49.089 END TEST bdev_bounds 00:06:49.089 ************************************ 00:06:49.089 19:56:22 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:49.089 19:56:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.089 19:56:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.089 19:56:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.349 ************************************ 00:06:49.349 START TEST bdev_nbd 00:06:49.349 ************************************ 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:49.349 19:56:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61368 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61368 /var/tmp/spdk-nbd.sock 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61368 ']' 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.349 19:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:49.349 [2024-11-19 19:56:22.959743] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:49.349 [2024-11-19 19:56:22.959855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.349 [2024-11-19 19:56:23.115720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.607 [2024-11-19 19:56:23.198631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.175 19:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.434 1+0 records in 00:06:50.434 1+0 records out 00:06:50.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335291 s, 12.2 MB/s 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.434 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.693 1+0 records in 00:06:50.693 1+0 records out 00:06:50.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352512 s, 11.6 MB/s 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:50.693 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.952 1+0 records in 00:06:50.952 1+0 records out 00:06:50.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277185 s, 14.8 MB/s 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.952 1+0 records in 00:06:50.952 1+0 records out 00:06:50.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259892 s, 15.8 MB/s 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.952 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.211 1+0 records in 00:06:51.211 1+0 records out 00:06:51.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325748 s, 12.6 MB/s 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.211 19:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.470 1+0 records in 00:06:51.470 1+0 records out 00:06:51.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404797 s, 10.1 MB/s 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.470 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.729 1+0 records in 00:06:51.729 1+0 records out 00:06:51.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031787 s, 12.9 MB/s 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.729 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd0", 00:06:51.987 "bdev_name": "Nvme0n1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd1", 00:06:51.987 "bdev_name": "Nvme1n1p1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd2", 00:06:51.987 "bdev_name": "Nvme1n1p2" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd3", 00:06:51.987 "bdev_name": "Nvme2n1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd4", 00:06:51.987 "bdev_name": "Nvme2n2" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd5", 00:06:51.987 "bdev_name": "Nvme2n3" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd6", 00:06:51.987 "bdev_name": "Nvme3n1" 00:06:51.987 } 00:06:51.987 ]' 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd0", 00:06:51.987 "bdev_name": "Nvme0n1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd1", 00:06:51.987 "bdev_name": "Nvme1n1p1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd2", 00:06:51.987 "bdev_name": "Nvme1n1p2" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd3", 00:06:51.987 "bdev_name": "Nvme2n1" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd4", 00:06:51.987 "bdev_name": "Nvme2n2" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd5", 00:06:51.987 "bdev_name": "Nvme2n3" 00:06:51.987 }, 00:06:51.987 { 00:06:51.987 "nbd_device": "/dev/nbd6", 00:06:51.987 "bdev_name": "Nvme3n1" 00:06:51.987 } 00:06:51.987 ]' 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.987 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.245 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.245 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.246 19:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.503 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.762 19:56:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.762 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.021 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.279 19:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.538 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.796 
19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.796 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:53.797 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:54.056 /dev/nbd0 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.056 1+0 records in 00:06:54.056 1+0 records out 00:06:54.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318268 s, 12.9 MB/s 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.056 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:54.056 /dev/nbd1 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.317 19:56:27 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.317 1+0 records in 00:06:54.317 1+0 records out 00:06:54.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262811 s, 15.6 MB/s 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.317 19:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:54.317 /dev/nbd10 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.317 1+0 records in 00:06:54.317 1+0 records out 00:06:54.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123942 s, 3.3 MB/s 00:06:54.317 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:54.580 /dev/nbd11 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.580 1+0 records in 00:06:54.580 1+0 records out 00:06:54.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120962 s, 3.4 MB/s 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.580 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:54.842 /dev/nbd12 00:06:54.842 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:54.842 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
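[Editor's reference note] The per-device probe traced above can be reproduced by hand once an nbd device is exported: a single 4 KiB O_DIRECT read through dd, followed by a size check on the copied file. The loop below is a sketch assembled from the commands visible in this log (grep of /proc/partitions, dd with iflag=direct, stat -c %s); the device list and the temporary file path are assumptions.
# Sketch: confirm each exported nbd device answers a 4 KiB direct read (device list assumed).
for nbd in nbd0 nbd1 nbd10 nbd11 nbd12 nbd13 nbd14; do
    grep -q -w "$nbd" /proc/partitions || { echo "$nbd not present"; continue; }
    dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "$nbd: OK"
    rm -f /tmp/nbdtest
done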
00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.843 1+0 records in 00:06:54.843 1+0 records out 00:06:54.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136 s, 3.0 MB/s 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.843 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:55.104 /dev/nbd13 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.104 1+0 records in 00:06:55.104 1+0 records out 00:06:55.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615078 s, 6.7 MB/s 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.104 19:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:55.366 /dev/nbd14 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.366 1+0 records in 00:06:55.366 1+0 records out 00:06:55.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120495 s, 3.4 MB/s 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.366 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.367 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.367 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd0", 00:06:55.628 "bdev_name": "Nvme0n1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd1", 00:06:55.628 "bdev_name": "Nvme1n1p1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd10", 00:06:55.628 "bdev_name": "Nvme1n1p2" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd11", 00:06:55.628 "bdev_name": "Nvme2n1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd12", 00:06:55.628 "bdev_name": "Nvme2n2" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd13", 00:06:55.628 "bdev_name": "Nvme2n3" 
00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd14", 00:06:55.628 "bdev_name": "Nvme3n1" 00:06:55.628 } 00:06:55.628 ]' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd0", 00:06:55.628 "bdev_name": "Nvme0n1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd1", 00:06:55.628 "bdev_name": "Nvme1n1p1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd10", 00:06:55.628 "bdev_name": "Nvme1n1p2" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd11", 00:06:55.628 "bdev_name": "Nvme2n1" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd12", 00:06:55.628 "bdev_name": "Nvme2n2" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd13", 00:06:55.628 "bdev_name": "Nvme2n3" 00:06:55.628 }, 00:06:55.628 { 00:06:55.628 "nbd_device": "/dev/nbd14", 00:06:55.628 "bdev_name": "Nvme3n1" 00:06:55.628 } 00:06:55.628 ]' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.628 /dev/nbd1 00:06:55.628 /dev/nbd10 00:06:55.628 /dev/nbd11 00:06:55.628 /dev/nbd12 00:06:55.628 /dev/nbd13 00:06:55.628 /dev/nbd14' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.628 /dev/nbd1 00:06:55.628 /dev/nbd10 00:06:55.628 /dev/nbd11 00:06:55.628 /dev/nbd12 00:06:55.628 /dev/nbd13 00:06:55.628 /dev/nbd14' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:55.628 256+0 records in 00:06:55.628 256+0 records out 00:06:55.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068737 s, 153 MB/s 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.628 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.890 256+0 records in 00:06:55.890 256+0 records out 00:06:55.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.206432 s, 5.1 MB/s 00:06:55.890 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.890 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.152 256+0 records in 00:06:56.152 256+0 records out 00:06:56.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.269747 s, 3.9 MB/s 00:06:56.152 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.152 19:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:56.414 256+0 records in 00:06:56.414 256+0 records out 00:06:56.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265832 s, 3.9 MB/s 00:06:56.414 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.414 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:56.676 256+0 records in 00:06:56.676 256+0 records out 00:06:56.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240671 s, 4.4 MB/s 00:06:56.676 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.676 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:56.937 256+0 records in 00:06:56.937 256+0 records out 00:06:56.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227577 s, 4.6 MB/s 00:06:56.937 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.937 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:57.199 256+0 records in 00:06:57.199 256+0 records out 00:06:57.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24763 s, 4.2 MB/s 00:06:57.199 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.199 19:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:57.461 256+0 records in 00:06:57.461 256+0 records out 00:06:57.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204817 s, 5.1 MB/s 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.461 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.722 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.009 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.270 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.271 19:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:58.271 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.271 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.271 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.271 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.532 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.794 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.056 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:59.319 19:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:59.319 malloc_lvol_verify 00:06:59.319 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:59.581 b729d2a3-19d0-4a75-a9ac-6158cc57c2a8 00:06:59.581 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:59.843 f4cd6a54-0e78-478d-ad32-288f27d056ff 00:06:59.843 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:00.107 /dev/nbd0 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:00.107 mke2fs 1.47.0 (5-Feb-2023) 00:07:00.107 Discarding device blocks: 0/4096 done 00:07:00.107 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:00.107 00:07:00.107 Allocating group tables: 0/1 done 00:07:00.107 Writing inode tables: 0/1 done 00:07:00.107 Creating journal (1024 blocks): done 00:07:00.107 Writing superblocks and filesystem accounting information: 0/1 done 00:07:00.107 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:00.107 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61368 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61368 ']' 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61368 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.378 19:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61368 00:07:00.378 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.378 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.378 killing process with pid 61368 00:07:00.378 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61368' 00:07:00.378 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61368 00:07:00.378 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61368 00:07:01.360 19:56:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:01.360 00:07:01.360 real 0m11.900s 00:07:01.360 user 0m16.106s 00:07:01.360 sys 0m3.986s 00:07:01.360 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.360 ************************************ 00:07:01.360 END TEST bdev_nbd 00:07:01.360 ************************************ 00:07:01.360 19:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:01.360 skipping fio tests on NVMe due to multi-ns failures. 00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:01.360 19:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:01.360 19:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:01.360 19:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.360 19:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.360 ************************************ 00:07:01.360 START TEST bdev_verify 00:07:01.360 ************************************ 00:07:01.360 19:56:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:01.360 [2024-11-19 19:56:34.940015] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:01.360 [2024-11-19 19:56:34.940162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61793 ] 00:07:01.360 [2024-11-19 19:56:35.103125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.621 [2024-11-19 19:56:35.235991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.621 [2024-11-19 19:56:35.236088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.195 Running I/O for 5 seconds... 
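The bdev_verify stage above simply runs the bdevperf example app against the bdevs described in test/bdev/bdev.json. Stripped of the test-harness wrappers (and of the harness-specific -C flag and trailing empty argument), the invocation boils down to the sketch below; the flag meanings given are the common bdevperf ones and are worth confirming with --help on your build:

    # from the root of the SPDK repository
    #   -q 128     queue depth per job
    #   -o 4096    I/O size in bytes
    #   -w verify  write a pattern, read it back, and compare
    #   -t 5       run time in seconds
    #   -m 0x3     core mask (cores 0 and 1)
    ./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -m 0x3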
00:07:04.517 20736.00 IOPS, 81.00 MiB/s [2024-11-19T19:56:39.253Z] 21568.00 IOPS, 84.25 MiB/s [2024-11-19T19:56:40.195Z] 21226.67 IOPS, 82.92 MiB/s [2024-11-19T19:56:41.140Z] 20624.00 IOPS, 80.56 MiB/s [2024-11-19T19:56:41.140Z] 20172.80 IOPS, 78.80 MiB/s 00:07:07.346 Latency(us) 00:07:07.346 [2024-11-19T19:56:41.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.346 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x0 length 0xbd0bd 00:07:07.346 Nvme0n1 : 5.10 1406.41 5.49 0.00 0.00 90833.04 14720.39 77433.30 00:07:07.346 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:07.346 Nvme0n1 : 5.05 1419.29 5.54 0.00 0.00 89801.87 15829.46 91145.45 00:07:07.346 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x0 length 0x4ff80 00:07:07.346 Nvme1n1p1 : 5.10 1405.54 5.49 0.00 0.00 90766.26 16636.06 76626.71 00:07:07.346 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:07.346 Nvme1n1p1 : 5.08 1422.72 5.56 0.00 0.00 89240.95 10586.58 74610.22 00:07:07.346 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x0 length 0x4ff7f 00:07:07.346 Nvme1n1p2 : 5.10 1404.81 5.49 0.00 0.00 90655.81 17845.96 75013.51 00:07:07.346 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:07.346 Nvme1n1p2 : 5.10 1430.85 5.59 0.00 0.00 88769.81 12804.73 69367.34 00:07:07.346 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.346 Verification LBA range: start 0x0 length 0x80000 00:07:07.347 Nvme2n1 : 5.11 1404.11 5.48 0.00 0.00 90489.51 19156.68 73803.62 00:07:07.347 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x80000 length 0x80000 00:07:07.347 Nvme2n1 : 5.10 1430.36 5.59 0.00 0.00 88622.18 13006.38 67754.14 00:07:07.347 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x0 length 0x80000 00:07:07.347 Nvme2n2 : 5.11 1403.42 5.48 0.00 0.00 90346.84 18450.90 73400.32 00:07:07.347 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x80000 length 0x80000 00:07:07.347 Nvme2n2 : 5.10 1429.75 5.58 0.00 0.00 88500.80 14115.45 70980.53 00:07:07.347 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x0 length 0x80000 00:07:07.347 Nvme2n3 : 5.11 1402.70 5.48 0.00 0.00 90223.17 17341.83 75820.11 00:07:07.347 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x80000 length 0x80000 00:07:07.347 Nvme2n3 : 5.11 1429.03 5.58 0.00 0.00 88390.26 15728.64 72593.72 00:07:07.347 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x0 length 0x20000 00:07:07.347 Nvme3n1 : 5.11 1402.32 5.48 0.00 0.00 90074.43 11796.48 78239.90 00:07:07.347 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.347 Verification LBA range: start 0x20000 length 0x20000 00:07:07.347 
Nvme3n1 : 5.11 1428.33 5.58 0.00 0.00 88261.25 14821.22 73803.62 00:07:07.347 [2024-11-19T19:56:41.141Z] =================================================================================================================== 00:07:07.347 [2024-11-19T19:56:41.141Z] Total : 19819.64 77.42 0.00 0.00 89633.73 10586.58 91145.45 00:07:08.734 00:07:08.734 real 0m7.355s 00:07:08.734 user 0m13.616s 00:07:08.734 sys 0m0.278s 00:07:08.734 19:56:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.734 ************************************ 00:07:08.734 END TEST bdev_verify 00:07:08.734 ************************************ 00:07:08.734 19:56:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:08.734 19:56:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:08.734 19:56:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:08.734 19:56:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.734 19:56:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.734 ************************************ 00:07:08.734 START TEST bdev_verify_big_io 00:07:08.734 ************************************ 00:07:08.734 19:56:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:08.734 [2024-11-19 19:56:42.362765] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:08.734 [2024-11-19 19:56:42.362901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61891 ] 00:07:08.995 [2024-11-19 19:56:42.527788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.995 [2024-11-19 19:56:42.666869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.995 [2024-11-19 19:56:42.666976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.936 Running I/O for 5 seconds... 
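In the per-bdev latency table from the first verify run above, each bdev is reported twice, once per core of the 0x3 mask (the Core Mask 0x1 and 0x2 job rows). The MiB/s column is just IOPS multiplied by the 4096-byte I/O size; for the totals row:

    19819.64 IOPS x 4096 B = 81,181,245 B/s ; 81,181,245 / 1,048,576 ≈ 77.42 MiB/s

which matches the reported total. The big-I/O pass that has just started repeats the same verify workload with -o 65536, so the same relation applies with a 64 KiB multiplier.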
00:07:14.741 1105.00 IOPS, 69.06 MiB/s [2024-11-19T19:56:49.918Z] 2520.00 IOPS, 157.50 MiB/s [2024-11-19T19:56:49.918Z] 3167.00 IOPS, 197.94 MiB/s 00:07:16.124 Latency(us) 00:07:16.124 [2024-11-19T19:56:49.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.124 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0xbd0b 00:07:16.124 Nvme0n1 : 5.90 108.90 6.81 0.00 0.00 1116394.59 26416.05 1264743.98 00:07:16.124 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:16.124 Nvme0n1 : 5.89 108.69 6.79 0.00 0.00 1110746.19 27021.00 1271196.75 00:07:16.124 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x4ff8 00:07:16.124 Nvme1n1p1 : 5.76 111.05 6.94 0.00 0.00 1075826.37 103244.41 1096971.82 00:07:16.124 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:16.124 Nvme1n1p1 : 5.75 111.25 6.95 0.00 0.00 1077483.05 104857.60 1096971.82 00:07:16.124 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x4ff7 00:07:16.124 Nvme1n1p2 : 5.91 112.03 7.00 0.00 0.00 1029020.29 142767.66 929199.66 00:07:16.124 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:16.124 Nvme1n1p2 : 5.89 112.51 7.03 0.00 0.00 1028392.22 137121.48 961463.53 00:07:16.124 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x8000 00:07:16.124 Nvme2n1 : 6.00 117.28 7.33 0.00 0.00 960929.05 90338.86 903388.55 00:07:16.124 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x8000 length 0x8000 00:07:16.124 Nvme2n1 : 5.94 118.61 7.41 0.00 0.00 957133.23 38515.00 987274.63 00:07:16.124 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x8000 00:07:16.124 Nvme2n2 : 6.06 122.84 7.68 0.00 0.00 891162.72 13006.38 903388.55 00:07:16.124 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x8000 length 0x8000 00:07:16.124 Nvme2n2 : 6.01 124.06 7.75 0.00 0.00 885252.56 34078.72 1006632.96 00:07:16.124 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x8000 00:07:16.124 Nvme2n3 : 6.12 119.09 7.44 0.00 0.00 886438.12 44564.48 1729343.80 00:07:16.124 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x8000 length 0x8000 00:07:16.124 Nvme2n3 : 6.02 127.64 7.98 0.00 0.00 832950.88 41943.04 1019538.51 00:07:16.124 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x0 length 0x2000 00:07:16.124 Nvme3n1 : 6.16 139.84 8.74 0.00 0.00 737362.82 7259.37 1768060.46 00:07:16.124 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.124 Verification LBA range: start 0x2000 length 0x2000 00:07:16.124 Nvme3n1 : 6.13 150.22 9.39 0.00 0.00 687971.46 932.63 1045349.61 00:07:16.124 
[2024-11-19T19:56:49.918Z] =================================================================================================================== 00:07:16.124 [2024-11-19T19:56:49.918Z] Total : 1684.02 105.25 0.00 0.00 933728.79 932.63 1768060.46 00:07:18.040 00:07:18.040 real 0m9.166s 00:07:18.040 user 0m17.184s 00:07:18.040 sys 0m0.326s 00:07:18.040 19:56:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.040 19:56:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:18.040 ************************************ 00:07:18.040 END TEST bdev_verify_big_io 00:07:18.040 ************************************ 00:07:18.040 19:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.040 19:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.040 19:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.040 19:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.040 ************************************ 00:07:18.040 START TEST bdev_write_zeroes 00:07:18.040 ************************************ 00:07:18.040 19:56:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.040 [2024-11-19 19:56:51.594058] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:18.040 [2024-11-19 19:56:51.594190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62006 ] 00:07:18.040 [2024-11-19 19:56:51.754547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.299 [2024-11-19 19:56:51.878214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.867 Running I/O for 1 seconds... 
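The same sanity check works for the 64 KiB big-I/O totals above:

    1684.02 IOPS x 65536 B ≈ 110,363,936 B/s ; 110,363,936 / 1,048,576 ≈ 105.25 MiB/s

again matching the table. The write_zeroes pass that follows reuses the 4096-byte I/O size with a 1-second run, and its table below shows only Core Mask 0x1 jobs, i.e. a single reactor core.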
00:07:19.806 10896.00 IOPS, 42.56 MiB/s 00:07:19.806 Latency(us) 00:07:19.806 [2024-11-19T19:56:53.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.806 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme0n1 : 1.02 1444.86 5.64 0.00 0.00 88406.13 6074.68 512995.64 00:07:19.806 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme1n1p1 : 1.02 1759.34 6.87 0.00 0.00 72496.41 11746.07 500090.09 00:07:19.806 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme1n1p2 : 1.03 1747.38 6.83 0.00 0.00 72581.91 11645.24 500090.09 00:07:19.806 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme2n1 : 1.03 1745.39 6.82 0.00 0.00 72406.84 11594.83 500090.09 00:07:19.806 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme2n2 : 1.03 1743.39 6.81 0.00 0.00 72360.88 11594.83 500090.09 00:07:19.806 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme2n3 : 1.03 1741.40 6.80 0.00 0.00 72393.06 11594.83 503316.48 00:07:19.806 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.806 Nvme3n1 : 1.03 1739.42 6.79 0.00 0.00 72311.48 10737.82 503316.48 00:07:19.806 [2024-11-19T19:56:53.600Z] =================================================================================================================== 00:07:19.806 [2024-11-19T19:56:53.600Z] Total : 11921.20 46.57 0.00 0.00 74354.89 6074.68 512995.64 00:07:20.748 00:07:20.748 real 0m2.817s 00:07:20.749 user 0m2.457s 00:07:20.749 sys 0m0.239s 00:07:20.749 19:56:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.749 ************************************ 00:07:20.749 END TEST bdev_write_zeroes 00:07:20.749 ************************************ 00:07:20.749 19:56:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:20.749 19:56:54 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.749 19:56:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.749 19:56:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.749 19:56:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.749 ************************************ 00:07:20.749 START TEST bdev_json_nonenclosed 00:07:20.749 ************************************ 00:07:20.749 19:56:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.749 [2024-11-19 19:56:54.485116] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:20.749 [2024-11-19 19:56:54.485284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:07:21.010 [2024-11-19 19:56:54.650149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.010 [2024-11-19 19:56:54.785767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.010 [2024-11-19 19:56:54.785870] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:21.010 [2024-11-19 19:56:54.785895] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.010 [2024-11-19 19:56:54.785905] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.271 00:07:21.271 real 0m0.571s 00:07:21.271 user 0m0.350s 00:07:21.271 sys 0m0.114s 00:07:21.271 19:56:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.271 ************************************ 00:07:21.271 END TEST bdev_json_nonenclosed 00:07:21.271 ************************************ 00:07:21.271 19:56:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:21.271 19:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.271 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:21.271 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.271 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.271 ************************************ 00:07:21.271 START TEST bdev_json_nonarray 00:07:21.271 ************************************ 00:07:21.271 19:56:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.531 [2024-11-19 19:56:55.124268] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:21.531 [2024-11-19 19:56:55.124413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62090 ] 00:07:21.531 [2024-11-19 19:56:55.284623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.792 [2024-11-19 19:56:55.376263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.792 [2024-11-19 19:56:55.376339] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
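Both JSON-config failures here are intentional: nonenclosed.json is not wrapped in a top-level object, and nonarray.json does not make "subsystems" an array, so bdevperf aborts with the json_config errors shown. For contrast, a minimal well-formed --json input has the shape sketched below; the malloc bdev is purely illustrative and is not what the test files contain:

    # from the SPDK repo root: write a minimal, well-formed config and run a short job against it
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/minimal_bdev.json -q 128 -o 4096 -w write_zeroes -t 1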
00:07:21.792 [2024-11-19 19:56:55.376352] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.792 [2024-11-19 19:56:55.376360] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.792 00:07:21.792 real 0m0.458s 00:07:21.792 user 0m0.263s 00:07:21.792 sys 0m0.091s 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.792 ************************************ 00:07:21.792 END TEST bdev_json_nonarray 00:07:21.792 ************************************ 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:21.792 19:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:21.792 19:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:21.792 19:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:21.792 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.792 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.792 19:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.792 ************************************ 00:07:21.792 START TEST bdev_gpt_uuid 00:07:21.792 ************************************ 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62110 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62110 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62110 ']' 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.792 19:56:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.053 [2024-11-19 19:56:55.653981] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:22.054 [2024-11-19 19:56:55.654110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62110 ] 00:07:22.054 [2024-11-19 19:56:55.808378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.315 [2024-11-19 19:56:55.946148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.888 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.888 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:22.888 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.888 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.888 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.461 Some configs were skipped because the RPC state that can call them passed over. 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.461 19:56:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:23.461 { 00:07:23.461 "name": "Nvme1n1p1", 00:07:23.461 "aliases": [ 00:07:23.461 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:23.461 ], 00:07:23.461 "product_name": "GPT Disk", 00:07:23.461 "block_size": 4096, 00:07:23.461 "num_blocks": 655104, 00:07:23.461 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:23.461 "assigned_rate_limits": { 00:07:23.461 "rw_ios_per_sec": 0, 00:07:23.461 "rw_mbytes_per_sec": 0, 00:07:23.461 "r_mbytes_per_sec": 0, 00:07:23.461 "w_mbytes_per_sec": 0 00:07:23.461 }, 00:07:23.461 "claimed": false, 00:07:23.461 "zoned": false, 00:07:23.461 "supported_io_types": { 00:07:23.461 "read": true, 00:07:23.461 "write": true, 00:07:23.461 "unmap": true, 00:07:23.461 "flush": true, 00:07:23.461 "reset": true, 00:07:23.461 "nvme_admin": false, 00:07:23.461 "nvme_io": false, 00:07:23.461 "nvme_io_md": false, 00:07:23.461 "write_zeroes": true, 00:07:23.461 "zcopy": false, 00:07:23.461 "get_zone_info": false, 00:07:23.461 "zone_management": false, 00:07:23.461 "zone_append": false, 00:07:23.461 "compare": true, 00:07:23.461 "compare_and_write": false, 00:07:23.461 "abort": true, 00:07:23.461 "seek_hole": false, 00:07:23.461 "seek_data": false, 00:07:23.461 "copy": true, 00:07:23.461 "nvme_iov_md": false 00:07:23.461 }, 00:07:23.461 "driver_specific": { 
00:07:23.461 "gpt": { 00:07:23.461 "base_bdev": "Nvme1n1", 00:07:23.461 "offset_blocks": 256, 00:07:23.461 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:23.461 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:23.461 "partition_name": "SPDK_TEST_first" 00:07:23.461 } 00:07:23.461 } 00:07:23.461 } 00:07:23.461 ]' 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.461 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:23.462 { 00:07:23.462 "name": "Nvme1n1p2", 00:07:23.462 "aliases": [ 00:07:23.462 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:23.462 ], 00:07:23.462 "product_name": "GPT Disk", 00:07:23.462 "block_size": 4096, 00:07:23.462 "num_blocks": 655103, 00:07:23.462 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:23.462 "assigned_rate_limits": { 00:07:23.462 "rw_ios_per_sec": 0, 00:07:23.462 "rw_mbytes_per_sec": 0, 00:07:23.462 "r_mbytes_per_sec": 0, 00:07:23.462 "w_mbytes_per_sec": 0 00:07:23.462 }, 00:07:23.462 "claimed": false, 00:07:23.462 "zoned": false, 00:07:23.462 "supported_io_types": { 00:07:23.462 "read": true, 00:07:23.462 "write": true, 00:07:23.462 "unmap": true, 00:07:23.462 "flush": true, 00:07:23.462 "reset": true, 00:07:23.462 "nvme_admin": false, 00:07:23.462 "nvme_io": false, 00:07:23.462 "nvme_io_md": false, 00:07:23.462 "write_zeroes": true, 00:07:23.462 "zcopy": false, 00:07:23.462 "get_zone_info": false, 00:07:23.462 "zone_management": false, 00:07:23.462 "zone_append": false, 00:07:23.462 "compare": true, 00:07:23.462 "compare_and_write": false, 00:07:23.462 "abort": true, 00:07:23.462 "seek_hole": false, 00:07:23.462 "seek_data": false, 00:07:23.462 "copy": true, 00:07:23.462 "nvme_iov_md": false 00:07:23.462 }, 00:07:23.462 "driver_specific": { 00:07:23.462 "gpt": { 00:07:23.462 "base_bdev": "Nvme1n1", 00:07:23.462 "offset_blocks": 655360, 00:07:23.462 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:23.462 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:23.462 "partition_name": "SPDK_TEST_second" 00:07:23.462 } 00:07:23.462 } 00:07:23.462 } 00:07:23.462 ]' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62110 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62110 ']' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62110 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62110 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.462 killing process with pid 62110 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62110' 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62110 00:07:23.462 19:56:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62110 00:07:25.375 00:07:25.375 real 0m3.316s 00:07:25.375 user 0m3.350s 00:07:25.375 sys 0m0.462s 00:07:25.375 19:56:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.375 ************************************ 00:07:25.375 END TEST bdev_gpt_uuid 00:07:25.375 ************************************ 00:07:25.375 19:56:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:25.375 19:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:25.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.897 Waiting for block devices as requested 00:07:25.897 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.897 0000:00:10.0 (1b36 0010): 
00:07:25.897 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:26.157 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.485 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:31.485 19:57:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:31.485 19:57:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:31.485 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:31.485 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:31.485 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:31.485 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:31.485 19:57:05 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:31.485 ************************************ 00:07:31.485 END TEST blockdev_nvme_gpt 00:07:31.485 ************************************ 00:07:31.485 00:07:31.485 real 0m57.676s 00:07:31.485 user 1m12.440s 00:07:31.485 sys 0m8.576s 00:07:31.485 19:57:05 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.485 19:57:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.485 19:57:05 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:31.485 19:57:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.485 19:57:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.485 19:57:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.485 ************************************ 00:07:31.485 START TEST nvme 00:07:31.485 ************************************ 00:07:31.485 19:57:05 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:31.485 * Looking for test storage... 00:07:31.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:31.485 19:57:05 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.485 19:57:05 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.485 19:57:05 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.748 19:57:05 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.748 19:57:05 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.748 19:57:05 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.748 19:57:05 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.748 19:57:05 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.748 19:57:05 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:31.748 19:57:05 nvme -- scripts/common.sh@345 -- # : 1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.748 19:57:05 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:07:31.748 19:57:05 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@353 -- # local d=1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.748 19:57:05 nvme -- scripts/common.sh@355 -- # echo 1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.748 19:57:05 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@353 -- # local d=2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.748 19:57:05 nvme -- scripts/common.sh@355 -- # echo 2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.748 19:57:05 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.748 19:57:05 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.748 19:57:05 nvme -- scripts/common.sh@368 -- # return 0 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.748 --rc genhtml_branch_coverage=1 00:07:31.748 --rc genhtml_function_coverage=1 00:07:31.748 --rc genhtml_legend=1 00:07:31.748 --rc geninfo_all_blocks=1 00:07:31.748 --rc geninfo_unexecuted_blocks=1 00:07:31.748 00:07:31.748 ' 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.748 --rc genhtml_branch_coverage=1 00:07:31.748 --rc genhtml_function_coverage=1 00:07:31.748 --rc genhtml_legend=1 00:07:31.748 --rc geninfo_all_blocks=1 00:07:31.748 --rc geninfo_unexecuted_blocks=1 00:07:31.748 00:07:31.748 ' 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.748 --rc genhtml_branch_coverage=1 00:07:31.748 --rc genhtml_function_coverage=1 00:07:31.748 --rc genhtml_legend=1 00:07:31.748 --rc geninfo_all_blocks=1 00:07:31.748 --rc geninfo_unexecuted_blocks=1 00:07:31.748 00:07:31.748 ' 00:07:31.748 19:57:05 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.748 --rc genhtml_branch_coverage=1 00:07:31.748 --rc genhtml_function_coverage=1 00:07:31.748 --rc genhtml_legend=1 00:07:31.748 --rc geninfo_all_blocks=1 00:07:31.748 --rc geninfo_unexecuted_blocks=1 00:07:31.748 00:07:31.748 ' 00:07:31.748 19:57:05 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:32.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.896 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.896 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.896 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.896 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.896 19:57:06 nvme -- nvme/nvme.sh@79 -- # uname 00:07:32.896 19:57:06 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:32.896 19:57:06 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:32.896 19:57:06 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:32.896 19:57:06 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1075 -- # stubpid=62757 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:32.896 Waiting for stub to ready for secondary processes... 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62757 ]] 00:07:32.896 19:57:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:32.896 [2024-11-19 19:57:06.581846] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:32.896 [2024-11-19 19:57:06.581996] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:33.842 [2024-11-19 19:57:07.497487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.842 19:57:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:33.842 19:57:07 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62757 ]] 00:07:33.842 19:57:07 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:33.842 [2024-11-19 19:57:07.619339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.842 [2024-11-19 19:57:07.619615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.842 [2024-11-19 19:57:07.619719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.103 [2024-11-19 19:57:07.642834] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:34.103 [2024-11-19 19:57:07.642927] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:34.103 [2024-11-19 19:57:07.658642] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:34.103 [2024-11-19 19:57:07.658867] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:34.103 [2024-11-19 19:57:07.664413] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:34.103 [2024-11-19 19:57:07.664802] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:34.103 [2024-11-19 19:57:07.664925] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:34.103 [2024-11-19 19:57:07.669572] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:34.103 [2024-11-19 19:57:07.669763] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:34.103 [2024-11-19 19:57:07.669824] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:34.103 [2024-11-19 19:57:07.673793] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:34.103 [2024-11-19 19:57:07.673997] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:34.103 [2024-11-19 19:57:07.674056] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:34.103 [2024-11-19 19:57:07.674090] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:34.103 [2024-11-19 19:57:07.674118] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:35.045 done. 00:07:35.045 19:57:08 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:35.045 19:57:08 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:35.045 19:57:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:35.045 19:57:08 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:35.045 19:57:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.045 19:57:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.045 ************************************ 00:07:35.045 START TEST nvme_reset 00:07:35.045 ************************************ 00:07:35.045 19:57:08 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:35.045 Initializing NVMe Controllers 00:07:35.045 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:35.045 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:35.045 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:35.045 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:35.045 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:35.045 ************************************ 00:07:35.045 END TEST nvme_reset 00:07:35.045 ************************************ 00:07:35.045 00:07:35.045 real 0m0.248s 00:07:35.045 user 0m0.088s 00:07:35.045 sys 0m0.108s 00:07:35.045 19:57:08 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.045 19:57:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 19:57:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:35.306 19:57:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.306 19:57:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.306 19:57:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 ************************************ 00:07:35.306 START TEST nvme_identify 00:07:35.306 ************************************ 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:35.306 19:57:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:35.306 19:57:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:35.306 19:57:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:35.306 19:57:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:35.306 19:57:08 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:35.306 19:57:08 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:35.306 19:57:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:35.578 [2024-11-19 19:57:09.161977] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62790 terminated unexpected 00:07:35.578 ===================================================== 00:07:35.578 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:35.578 ===================================================== 00:07:35.578 Controller Capabilities/Features 00:07:35.578 ================================ 00:07:35.578 Vendor ID: 1b36 00:07:35.578 Subsystem Vendor ID: 1af4 00:07:35.578 Serial Number: 12343 00:07:35.578 Model Number: QEMU NVMe Ctrl 00:07:35.578 Firmware Version: 8.0.0 00:07:35.578 Recommended Arb Burst: 6 00:07:35.578 IEEE OUI Identifier: 00 54 52 00:07:35.578 Multi-path I/O 00:07:35.578 May have multiple subsystem ports: No 00:07:35.578 May have multiple controllers: Yes 00:07:35.578 Associated with SR-IOV VF: No 00:07:35.578 Max Data Transfer Size: 524288 00:07:35.578 Max Number of Namespaces: 256 00:07:35.578 Max Number of I/O Queues: 64 00:07:35.578 NVMe Specification Version (VS): 1.4 00:07:35.578 NVMe Specification Version (Identify): 1.4 00:07:35.578 Maximum Queue Entries: 2048 00:07:35.578 Contiguous Queues Required: Yes 00:07:35.578 Arbitration Mechanisms Supported 00:07:35.578 Weighted Round Robin: Not Supported 00:07:35.578 Vendor Specific: Not Supported 00:07:35.579 Reset Timeout: 7500 ms 00:07:35.579 Doorbell Stride: 4 bytes 00:07:35.579 NVM Subsystem Reset: Not Supported 00:07:35.579 Command Sets Supported 00:07:35.579 NVM Command Set: Supported 00:07:35.579 Boot Partition: Not Supported 00:07:35.579 Memory Page Size Minimum: 4096 bytes 00:07:35.579 Memory Page Size Maximum: 65536 bytes 00:07:35.579 Persistent Memory Region: Not Supported 00:07:35.579 Optional Asynchronous Events Supported 00:07:35.579 Namespace Attribute Notices: Supported 00:07:35.579 Firmware Activation Notices: Not Supported 00:07:35.579 ANA Change Notices: Not Supported 00:07:35.579 PLE Aggregate Log Change Notices: Not Supported 00:07:35.579 LBA Status Info Alert Notices: Not Supported 00:07:35.579 EGE Aggregate Log Change Notices: Not Supported 00:07:35.579 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.579 Zone Descriptor Change Notices: Not Supported 00:07:35.579 Discovery Log Change Notices: Not Supported 00:07:35.579 Controller Attributes 00:07:35.579 128-bit Host Identifier: Not Supported 00:07:35.579 Non-Operational Permissive Mode: Not Supported 00:07:35.579 NVM Sets: Not Supported 00:07:35.579 Read Recovery Levels: Not Supported 00:07:35.579 Endurance Groups: Supported 00:07:35.579 Predictable Latency Mode: Not Supported 00:07:35.579 Traffic Based Keep ALive: Not Supported 00:07:35.579 Namespace Granularity: Not Supported 00:07:35.579 SQ Associations: Not Supported 00:07:35.579 UUID List: Not Supported 00:07:35.579 Multi-Domain Subsystem: Not Supported 00:07:35.579 Fixed Capacity Management: Not Supported 00:07:35.579 Variable Capacity Management: Not Supported 00:07:35.579 Delete Endurance Group: Not Supported 00:07:35.579 Delete NVM Set: Not Supported 00:07:35.579 Extended LBA Formats Supported: Supported 00:07:35.579 Flexible Data Placement Supported: Supported 00:07:35.579 00:07:35.579 Controller Memory Buffer Support 00:07:35.579 ================================ 00:07:35.579 Supported: No 00:07:35.579 
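
The controller dumps in this test come from spdk_nvme_identify, run against the BDFs gathered by the get_nvme_bdfs helper traced above. A minimal sketch of that enumeration step, using only the commands that appear in the trace (paths assume the repo layout from this log):

  # Sketch: enumerate NVMe BDFs the same way the traced helper does.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
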
00:07:35.579 Persistent Memory Region Support 00:07:35.579 ================================ 00:07:35.579 Supported: No 00:07:35.579 00:07:35.579 Admin Command Set Attributes 00:07:35.579 ============================ 00:07:35.579 Security Send/Receive: Not Supported 00:07:35.580 Format NVM: Supported 00:07:35.580 Firmware Activate/Download: Not Supported 00:07:35.580 Namespace Management: Supported 00:07:35.580 Device Self-Test: Not Supported 00:07:35.580 Directives: Supported 00:07:35.580 NVMe-MI: Not Supported 00:07:35.580 Virtualization Management: Not Supported 00:07:35.580 Doorbell Buffer Config: Supported 00:07:35.580 Get LBA Status Capability: Not Supported 00:07:35.580 Command & Feature Lockdown Capability: Not Supported 00:07:35.580 Abort Command Limit: 4 00:07:35.580 Async Event Request Limit: 4 00:07:35.580 Number of Firmware Slots: N/A 00:07:35.580 Firmware Slot 1 Read-Only: N/A 00:07:35.580 Firmware Activation Without Reset: N/A 00:07:35.580 Multiple Update Detection Support: N/A 00:07:35.580 Firmware Update Granularity: No Information Provided 00:07:35.580 Per-Namespace SMART Log: Yes 00:07:35.580 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.580 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:35.580 Command Effects Log Page: Supported 00:07:35.580 Get Log Page Extended Data: Supported 00:07:35.580 Telemetry Log Pages: Not Supported 00:07:35.580 Persistent Event Log Pages: Not Supported 00:07:35.580 Supported Log Pages Log Page: May Support 00:07:35.580 Commands Supported & Effects Log Page: Not Supported 00:07:35.580 Feature Identifiers & Effects Log Page:May Support 00:07:35.580 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.580 Data Area 4 for Telemetry Log: Not Supported 00:07:35.580 Error Log Page Entries Supported: 1 00:07:35.580 Keep Alive: Not Supported 00:07:35.580 00:07:35.580 NVM Command Set Attributes 00:07:35.580 ========================== 00:07:35.580 Submission Queue Entry Size 00:07:35.580 Max: 64 00:07:35.580 Min: 64 00:07:35.580 Completion Queue Entry Size 00:07:35.580 Max: 16 00:07:35.580 Min: 16 00:07:35.580 Number of Namespaces: 256 00:07:35.580 Compare Command: Supported 00:07:35.580 Write Uncorrectable Command: Not Supported 00:07:35.580 Dataset Management Command: Supported 00:07:35.580 Write Zeroes Command: Supported 00:07:35.581 Set Features Save Field: Supported 00:07:35.581 Reservations: Not Supported 00:07:35.581 Timestamp: Supported 00:07:35.581 Copy: Supported 00:07:35.581 Volatile Write Cache: Present 00:07:35.581 Atomic Write Unit (Normal): 1 00:07:35.581 Atomic Write Unit (PFail): 1 00:07:35.581 Atomic Compare & Write Unit: 1 00:07:35.581 Fused Compare & Write: Not Supported 00:07:35.581 Scatter-Gather List 00:07:35.581 SGL Command Set: Supported 00:07:35.581 SGL Keyed: Not Supported 00:07:35.581 SGL Bit Bucket Descriptor: Not Supported 00:07:35.581 SGL Metadata Pointer: Not Supported 00:07:35.581 Oversized SGL: Not Supported 00:07:35.581 SGL Metadata Address: Not Supported 00:07:35.581 SGL Offset: Not Supported 00:07:35.581 Transport SGL Data Block: Not Supported 00:07:35.581 Replay Protected Memory Block: Not Supported 00:07:35.581 00:07:35.581 Firmware Slot Information 00:07:35.581 ========================= 00:07:35.581 Active slot: 1 00:07:35.581 Slot 1 Firmware Revision: 1.0 00:07:35.581 00:07:35.581 00:07:35.581 Commands Supported and Effects 00:07:35.581 ============================== 00:07:35.581 Admin Commands 00:07:35.581 -------------- 00:07:35.581 Delete I/O Submission Queue (00h): Supported 
00:07:35.581 Create I/O Submission Queue (01h): Supported 00:07:35.581 Get Log Page (02h): Supported 00:07:35.581 Delete I/O Completion Queue (04h): Supported 00:07:35.581 Create I/O Completion Queue (05h): Supported 00:07:35.581 Identify (06h): Supported 00:07:35.581 Abort (08h): Supported 00:07:35.581 Set Features (09h): Supported 00:07:35.581 Get Features (0Ah): Supported 00:07:35.581 Asynchronous Event Request (0Ch): Supported 00:07:35.581 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.581 Directive Send (19h): Supported 00:07:35.581 Directive Receive (1Ah): Supported 00:07:35.581 Virtualization Management (1Ch): Supported 00:07:35.581 Doorbell Buffer Config (7Ch): Supported 00:07:35.581 Format NVM (80h): Supported LBA-Change 00:07:35.581 I/O Commands 00:07:35.581 ------------ 00:07:35.582 Flush (00h): Supported LBA-Change 00:07:35.582 Write (01h): Supported LBA-Change 00:07:35.582 Read (02h): Supported 00:07:35.582 Compare (05h): Supported 00:07:35.582 Write Zeroes (08h): Supported LBA-Change 00:07:35.582 Dataset Management (09h): Supported LBA-Change 00:07:35.582 Unknown (0Ch): Supported 00:07:35.582 Unknown (12h): Supported 00:07:35.582 Copy (19h): Supported LBA-Change 00:07:35.582 Unknown (1Dh): Supported LBA-Change 00:07:35.582 00:07:35.582 Error Log 00:07:35.582 ========= 00:07:35.582 00:07:35.582 Arbitration 00:07:35.582 =========== 00:07:35.582 Arbitration Burst: no limit 00:07:35.582 00:07:35.582 Power Management 00:07:35.582 ================ 00:07:35.582 Number of Power States: 1 00:07:35.582 Current Power State: Power State #0 00:07:35.582 Power State #0: 00:07:35.582 Max Power: 25.00 W 00:07:35.582 Non-Operational State: Operational 00:07:35.582 Entry Latency: 16 microseconds 00:07:35.582 Exit Latency: 4 microseconds 00:07:35.582 Relative Read Throughput: 0 00:07:35.582 Relative Read Latency: 0 00:07:35.582 Relative Write Throughput: 0 00:07:35.582 Relative Write Latency: 0 00:07:35.582 Idle Power: Not Reported 00:07:35.582 Active Power: Not Reported 00:07:35.582 Non-Operational Permissive Mode: Not Supported 00:07:35.582 00:07:35.582 Health Information 00:07:35.582 ================== 00:07:35.582 Critical Warnings: 00:07:35.582 Available Spare Space: OK 00:07:35.582 Temperature: OK 00:07:35.582 Device Reliability: OK 00:07:35.582 Read Only: No 00:07:35.582 Volatile Memory Backup: OK 00:07:35.582 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.582 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.582 Available Spare: 0% 00:07:35.582 Available Spare Threshold: 0% 00:07:35.582 Life Percentage Used: 0% 00:07:35.582 Data Units Read: 814 00:07:35.582 Data Units Written: 743 00:07:35.582 Host Read Commands: 35416 00:07:35.582 Host Write Commands: 34840 00:07:35.582 Controller Busy Time: 0 minutes 00:07:35.582 Power Cycles: 0 00:07:35.582 Power On Hours: 0 hours 00:07:35.582 Unsafe Shutdowns: 0 00:07:35.582 Unrecoverable Media Errors: 0 00:07:35.582 Lifetime Error Log Entries: 0 00:07:35.582 Warning Temperature Time: 0 minutes 00:07:35.582 Critical Temperature Time: 0 minutes 00:07:35.583 00:07:35.583 Number of Queues 00:07:35.583 ================ 00:07:35.583 Number of I/O Submission Queues: 64 00:07:35.583 Number of I/O Completion Queues: 64 00:07:35.583 00:07:35.583 ZNS Specific Controller Data 00:07:35.583 ============================ 00:07:35.583 Zone Append Size Limit: 0 00:07:35.583 00:07:35.583 00:07:35.583 Active Namespaces 00:07:35.583 ================= 00:07:35.583 Namespace ID:1 00:07:35.583 Error Recovery Timeout: Unlimited 00:07:35.583 
Command Set Identifier: NVM (00h) 00:07:35.583 Deallocate: Supported 00:07:35.583 Deallocated/Unwritten Error: Supported 00:07:35.583 Deallocated Read Value: All 0x00 00:07:35.583 Deallocate in Write Zeroes: Not Supported 00:07:35.583 Deallocated Guard Field: 0xFFFF 00:07:35.583 Flush: Supported 00:07:35.583 Reservation: Not Supported 00:07:35.583 Namespace Sharing Capabilities: Multiple Controllers 00:07:35.583 Size (in LBAs): 262144 (1GiB) 00:07:35.583 Capacity (in LBAs): 262144 (1GiB) 00:07:35.583 Utilization (in LBAs): 262144 (1GiB) 00:07:35.583 Thin Provisioning: Not Supported 00:07:35.583 Per-NS Atomic Units: No 00:07:35.583 Maximum Single Source Range Length: 128 00:07:35.583 Maximum Copy Length: 128 00:07:35.583 Maximum Source Range Count: 128 00:07:35.583 NGUID/EUI64 Never Reused: No 00:07:35.583 Namespace Write Protected: No 00:07:35.583 Endurance group ID: 1 00:07:35.583 Number of LBA Formats: 8 00:07:35.583 Current LBA Format: LBA Format #04 00:07:35.583 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.583 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.583 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.583 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.583 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.583 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.583 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.583 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.583 00:07:35.583 Get Feature FDP: 00:07:35.583 ================ 00:07:35.583 Enabled: Yes 00:07:35.583 FDP configuration index: 0 00:07:35.583 00:07:35.583 FDP configurations log page 00:07:35.583 =========================== 00:07:35.583 Number of FDP configurations: 1 00:07:35.583 Version: 0 00:07:35.583 Size: 112 00:07:35.584 FDP Configuration Descriptor: 0 00:07:35.584 Descriptor Size: 96 00:07:35.584 Reclaim Group Identifier format: 2 00:07:35.584 FDP Volatile Write Cache: Not Present 00:07:35.584 FDP Configuration: Valid 00:07:35.584 Vendor Specific Size: 0 00:07:35.584 Number of Reclaim Groups: 2 00:07:35.584 Number of Reclaim Unit Handles: 8 00:07:35.584 Max Placement Identifiers: 128 00:07:35.584 Number of Namespaces Supported: 256 00:07:35.584 Reclaim unit Nominal Size: 6000000 bytes 00:07:35.584 Estimated Reclaim Unit Time Limit: Not Reported 00:07:35.584 RUH Desc #000: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #001: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #002: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #003: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #004: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #005: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #006: RUH Type: Initially Isolated 00:07:35.584 RUH Desc #007: RUH Type: Initially Isolated 00:07:35.584 00:07:35.584 FDP reclaim unit handle usage log page 00:07:35.584 ====================================== [2024-11-19 19:57:09.164723] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62790 terminated unexpected 00:07:35.584 Number of Reclaim Unit Handles: 8 00:07:35.584 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:35.584 RUH Usage Desc #001: RUH Attributes: Unused 00:07:35.584 RUH Usage Desc #002: RUH Attributes: Unused 00:07:35.584 RUH Usage Desc #003: RUH Attributes: Unused 00:07:35.584 RUH Usage Desc #004: RUH Attributes: Unused 00:07:35.584 RUH Usage Desc #005: RUH Attributes: Unused 00:07:35.585 RUH Usage Desc #006: RUH Attributes: Unused 00:07:35.585 RUH Usage Desc
#007: RUH Attributes: Unused 00:07:35.585 00:07:35.585 FDP statistics log page 00:07:35.585 ======================= 00:07:35.585 Host bytes with metadata written: 450666496 00:07:35.585 Media bytes with metadata written: 450719744 00:07:35.585 Media bytes erased: 0 00:07:35.585 00:07:35.585 FDP events log page 00:07:35.585 =================== 00:07:35.585 Number of FDP events: 0 00:07:35.585 00:07:35.585 NVM Specific Namespace Data 00:07:35.585 =========================== 00:07:35.585 Logical Block Storage Tag Mask: 0 00:07:35.585 Protection Information Capabilities: 00:07:35.585 16b Guard Protection Information Storage Tag Support: No 00:07:35.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.585 Storage Tag Check Read Support: No 00:07:35.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.585 ===================================================== 00:07:35.585 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:35.585 ===================================================== 00:07:35.585 Controller Capabilities/Features 00:07:35.585 ================================ 00:07:35.585 Vendor ID: 1b36 00:07:35.585 Subsystem Vendor ID: 1af4 00:07:35.585 Serial Number: 12340 00:07:35.585 Model Number: QEMU NVMe Ctrl 00:07:35.585 Firmware Version: 8.0.0 00:07:35.585 Recommended Arb Burst: 6 00:07:35.585 IEEE OUI Identifier: 00 54 52 00:07:35.585 Multi-path I/O 00:07:35.585 May have multiple subsystem ports: No 00:07:35.585 May have multiple controllers: No 00:07:35.586 Associated with SR-IOV VF: No 00:07:35.586 Max Data Transfer Size: 524288 00:07:35.586 Max Number of Namespaces: 256 00:07:35.586 Max Number of I/O Queues: 64 00:07:35.586 NVMe Specification Version (VS): 1.4 00:07:35.586 NVMe Specification Version (Identify): 1.4 00:07:35.586 Maximum Queue Entries: 2048 00:07:35.586 Contiguous Queues Required: Yes 00:07:35.586 Arbitration Mechanisms Supported 00:07:35.586 Weighted Round Robin: Not Supported 00:07:35.586 Vendor Specific: Not Supported 00:07:35.586 Reset Timeout: 7500 ms 00:07:35.586 Doorbell Stride: 4 bytes 00:07:35.586 NVM Subsystem Reset: Not Supported 00:07:35.586 Command Sets Supported 00:07:35.586 NVM Command Set: Supported 00:07:35.586 Boot Partition: Not Supported 00:07:35.586 Memory Page Size Minimum: 4096 bytes 00:07:35.586 Memory Page Size Maximum: 65536 bytes 00:07:35.586 Persistent Memory Region: Not Supported 00:07:35.586 Optional Asynchronous Events Supported 00:07:35.586 Namespace Attribute Notices: Supported 00:07:35.586 Firmware Activation Notices: Not Supported 00:07:35.586 ANA Change Notices: Not Supported 00:07:35.586 PLE Aggregate Log Change Notices: Not Supported 00:07:35.586 LBA Status Info Alert Notices: Not Supported 00:07:35.586 EGE Aggregate Log Change 
Notices: Not Supported 00:07:35.586 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.586 Zone Descriptor Change Notices: Not Supported 00:07:35.586 Discovery Log Change Notices: Not Supported 00:07:35.586 Controller Attributes 00:07:35.586 128-bit Host Identifier: Not Supported 00:07:35.587 Non-Operational Permissive Mode: Not Supported 00:07:35.587 NVM Sets: Not Supported 00:07:35.587 Read Recovery Levels: Not Supported 00:07:35.587 Endurance Groups: Not Supported 00:07:35.587 Predictable Latency Mode: Not Supported 00:07:35.587 Traffic Based Keep ALive: Not Supported 00:07:35.587 Namespace Granularity: Not Supported 00:07:35.587 SQ Associations: Not Supported 00:07:35.587 UUID List: Not Supported 00:07:35.587 Multi-Domain Subsystem: Not Supported 00:07:35.587 Fixed Capacity Management: Not Supported 00:07:35.587 Variable Capacity Management: Not Supported 00:07:35.587 Delete Endurance Group: Not Supported 00:07:35.587 Delete NVM Set: Not Supported 00:07:35.587 Extended LBA Formats Supported: Supported 00:07:35.587 Flexible Data Placement Supported: Not Supported 00:07:35.587 00:07:35.587 Controller Memory Buffer Support 00:07:35.587 ================================ 00:07:35.587 Supported: No 00:07:35.587 00:07:35.587 Persistent Memory Region Support 00:07:35.587 ================================ 00:07:35.587 Supported: No 00:07:35.587 00:07:35.587 Admin Command Set Attributes 00:07:35.587 ============================ 00:07:35.587 Security Send/Receive: Not Supported 00:07:35.587 Format NVM: Supported 00:07:35.587 Firmware Activate/Download: Not Supported 00:07:35.587 Namespace Management: Supported 00:07:35.587 Device Self-Test: Not Supported 00:07:35.587 Directives: Supported 00:07:35.587 NVMe-MI: Not Supported 00:07:35.587 Virtualization Management: Not Supported 00:07:35.587 Doorbell Buffer Config: Supported 00:07:35.587 Get LBA Status Capability: Not Supported 00:07:35.587 Command & Feature Lockdown Capability: Not Supported 00:07:35.587 Abort Command Limit: 4 00:07:35.587 Async Event Request Limit: 4 00:07:35.587 Number of Firmware Slots: N/A 00:07:35.587 Firmware Slot 1 Read-Only: N/A 00:07:35.587 Firmware Activation Without Reset: N/A 00:07:35.587 Multiple Update Detection Support: N/A 00:07:35.587 Firmware Update Granularity: No Information Provided 00:07:35.588 Per-Namespace SMART Log: Yes 00:07:35.588 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.588 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:35.588 Command Effects Log Page: Supported 00:07:35.588 Get Log Page Extended Data: Supported 00:07:35.588 Telemetry Log Pages: Not Supported 00:07:35.588 Persistent Event Log Pages: Not Supported 00:07:35.588 Supported Log Pages Log Page: May Support 00:07:35.588 Commands Supported & Effects Log Page: Not Supported 00:07:35.588 Feature Identifiers & Effects Log Page:May Support 00:07:35.588 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.588 Data Area 4 for Telemetry Log: Not Supported 00:07:35.588 Error Log Page Entries Supported: 1 00:07:35.588 Keep Alive: Not Supported 00:07:35.588 00:07:35.588 NVM Command Set Attributes 00:07:35.588 ========================== 00:07:35.588 Submission Queue Entry Size 00:07:35.588 Max: 64 00:07:35.588 Min: 64 00:07:35.588 Completion Queue Entry Size 00:07:35.588 Max: 16 00:07:35.588 Min: 16 00:07:35.588 Number of Namespaces: 256 00:07:35.588 Compare Command: Supported 00:07:35.588 Write Uncorrectable Command: Not Supported 00:07:35.588 Dataset Management Command: Supported 00:07:35.588 Write Zeroes Command: 
Supported 00:07:35.588 Set Features Save Field: Supported 00:07:35.588 Reservations: Not Supported 00:07:35.588 Timestamp: Supported 00:07:35.588 Copy: Supported 00:07:35.588 Volatile Write Cache: Present 00:07:35.588 Atomic Write Unit (Normal): 1 00:07:35.588 Atomic Write Unit (PFail): 1 00:07:35.588 Atomic Compare & Write Unit: 1 00:07:35.588 Fused Compare & Write: Not Supported 00:07:35.588 Scatter-Gather List 00:07:35.588 SGL Command Set: Supported 00:07:35.588 SGL Keyed: Not Supported 00:07:35.588 SGL Bit Bucket Descriptor: Not Supported 00:07:35.588 SGL Metadata Pointer: Not Supported 00:07:35.589 Oversized SGL: Not Supported 00:07:35.589 SGL Metadata Address: Not Supported 00:07:35.589 SGL Offset: Not Supported 00:07:35.589 Transport SGL Data Block: Not Supported 00:07:35.589 Replay Protected Memory Block: Not Supported 00:07:35.589 00:07:35.589 Firmware Slot Information 00:07:35.589 ========================= 00:07:35.589 Active slot: 1 00:07:35.589 Slot 1 Firmware Revision: 1.0 00:07:35.589 00:07:35.589 00:07:35.589 Commands Supported and Effects 00:07:35.589 ============================== 00:07:35.589 Admin Commands 00:07:35.589 -------------- 00:07:35.589 Delete I/O Submission Queue (00h): Supported 00:07:35.589 Create I/O Submission Queue (01h): Supported 00:07:35.589 Get Log Page (02h): Supported 00:07:35.589 Delete I/O Completion Queue (04h): Supported 00:07:35.589 Create I/O Completion Queue (05h): Supported 00:07:35.589 Identify (06h): Supported 00:07:35.589 Abort (08h): Supported 00:07:35.589 Set Features (09h): Supported 00:07:35.589 Get Features (0Ah): Supported 00:07:35.589 Asynchronous Event Request (0Ch): Supported 00:07:35.589 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.589 Directive Send (19h): Supported 00:07:35.589 Directive Receive (1Ah): Supported 00:07:35.589 Virtualization Management (1Ch): Supported 00:07:35.589 Doorbell Buffer Config (7Ch): Supported 00:07:35.589 Format NVM (80h): Supported LBA-Change 00:07:35.589 I/O Commands 00:07:35.589 ------------ 00:07:35.589 Flush (00h): Supported LBA-Change 00:07:35.589 Write (01h): Supported LBA-Change 00:07:35.589 Read (02h): Supported 00:07:35.589 Compare (05h): Supported 00:07:35.589 Write Zeroes (08h): Supported LBA-Change 00:07:35.589 Dataset Management (09h): Supported LBA-Change 00:07:35.589 Unknown (0Ch): Supported 00:07:35.589 Unknown (12h): Supported 00:07:35.589 Copy (19h): Supported LBA-Change 00:07:35.589 Unknown (1Dh): Supported LBA-Change 00:07:35.589 00:07:35.589 Error Log 00:07:35.589 ========= 00:07:35.589 00:07:35.589 Arbitration 00:07:35.589 =========== 00:07:35.589 Arbitration Burst: no limit 00:07:35.589 00:07:35.589 Power Management 00:07:35.589 ================ 00:07:35.589 Number of Power States: 1 00:07:35.589 Current Power State: Power State #0 00:07:35.589 Power State #0: 00:07:35.589 Max Power: 25.00 W 00:07:35.589 Non-Operational State: Operational 00:07:35.590 Entry Latency: 16 microseconds 00:07:35.590 Exit Latency: 4 microseconds 00:07:35.590 Relative Read Throughput: 0 00:07:35.590 Relative Read Latency: 0 00:07:35.590 Relative Write Throughput: 0 00:07:35.590 Relative Write Latency: 0 00:07:35.590 Idle Power: Not Reported 00:07:35.590 Active Power: Not Reported 00:07:35.590 Non-Operational Permissive Mode: Not Supported 00:07:35.590 00:07:35.590 Health Information 00:07:35.590 ================== 00:07:35.590 Critical Warnings: 00:07:35.590 Available Spare Space: OK 00:07:35.590 Temperature: OK 00:07:35.590 Device Reliability: OK 00:07:35.590 Read Only: No 
00:07:35.590 Volatile Memory Backup: OK 00:07:35.590 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.590 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.590 Available Spare: 0% 00:07:35.590 Available Spare Threshold: 0% 00:07:35.590 Life Percentage Used: 0% 00:07:35.590 Data Units Read: 656 00:07:35.590 Data Units Written: 584 00:07:35.590 Host Read Commands: 33946 00:07:35.590 Host Write Commands: 33732 00:07:35.590 Controller Busy Time: 0 minutes 00:07:35.590 Power Cycles: 0 00:07:35.590 Power On Hours: 0 hours 00:07:35.590 Unsafe Shutdowns: 0 00:07:35.590 Unrecoverable Media Errors: 0 00:07:35.590 Lifetime Error Log Entries: 0 00:07:35.590 Warning Temperature Time: 0 minutes 00:07:35.590 Critical Temperature Time: 0 minutes 00:07:35.590 00:07:35.590 Number of Queues 00:07:35.590 ================ 00:07:35.590 Number of I/O Submission Queues: 64 00:07:35.590 Number of I/O Completion Queues: 64 00:07:35.590 00:07:35.590 ZNS Specific Controller Data 00:07:35.590 ============================ 00:07:35.590 Zone Append Size Limit: 0 00:07:35.590 00:07:35.590 00:07:35.590 Active Namespaces 00:07:35.590 ================= 00:07:35.590 Namespace ID:1 00:07:35.590 Error Recovery Timeout: Unlimited 00:07:35.591 Command Set Identifier: NVM (00h) 00:07:35.591 Deallocate: Supported 00:07:35.591 Deallocated/Unwritten Error: Supported 00:07:35.591 Deallocated Read Value: All 0x00 00:07:35.591 Deallocate in Write Zeroes: Not Supported 00:07:35.591 Deallocated Guard Field: 0xFFFF 00:07:35.591 Flush: Supported 00:07:35.591 Reservation: Not Supported 00:07:35.591 Metadata Transferred as: Separate Metadata Buffer 00:07:35.591 Namespace Sharing Capabilities: Private 00:07:35.591 Size (in LBAs): 1548666 (5GiB) 00:07:35.591 Capacity (in LBAs): 1548666 (5GiB) 00:07:35.591 Utilization (in LBAs): 1548666 (5GiB) 00:07:35.591 Thin Provisioning: Not Supported 00:07:35.591 Per-NS Atomic Units: No 00:07:35.591 Maximum Single Source Range Length: 128 00:07:35.591 Maximum Copy Length: 128 00:07:35.591 Maximum Source Range Count: 128 00:07:35.591 NGUID/EUI64 Never Reused: No 00:07:35.591 Namespace Write Protected: No 00:07:35.591 Number of LBA Formats: 8 00:07:35.591 Current LBA Format: LBA Format #07 [2024-11-19 19:57:09.166813] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62790 terminated unexpected 00:07:35.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.591 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.591 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.591 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.591 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.591 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.591 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.591 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.591 00:07:35.591 NVM Specific Namespace Data 00:07:35.591 =========================== 00:07:35.591 Logical Block Storage Tag Mask: 0 00:07:35.591 Protection Information Capabilities: 00:07:35.592 16b Guard Protection Information Storage Tag Support: No 00:07:35.592 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.592 Storage Tag Check Read Support: No 00:07:35.592 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #02: Storage Tag Size: 0 , Protection
Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.592 ===================================================== 00:07:35.592 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:35.592 ===================================================== 00:07:35.592 Controller Capabilities/Features 00:07:35.592 ================================ 00:07:35.592 Vendor ID: 1b36 00:07:35.592 Subsystem Vendor ID: 1af4 00:07:35.592 Serial Number: 12341 00:07:35.592 Model Number: QEMU NVMe Ctrl 00:07:35.592 Firmware Version: 8.0.0 00:07:35.592 Recommended Arb Burst: 6 00:07:35.592 IEEE OUI Identifier: 00 54 52 00:07:35.592 Multi-path I/O 00:07:35.592 May have multiple subsystem ports: No 00:07:35.592 May have multiple controllers: No 00:07:35.592 Associated with SR-IOV VF: No 00:07:35.592 Max Data Transfer Size: 524288 00:07:35.593 Max Number of Namespaces: 256 00:07:35.593 Max Number of I/O Queues: 64 00:07:35.593 NVMe Specification Version (VS): 1.4 00:07:35.593 NVMe Specification Version (Identify): 1.4 00:07:35.593 Maximum Queue Entries: 2048 00:07:35.593 Contiguous Queues Required: Yes 00:07:35.593 Arbitration Mechanisms Supported 00:07:35.593 Weighted Round Robin: Not Supported 00:07:35.593 Vendor Specific: Not Supported 00:07:35.593 Reset Timeout: 7500 ms 00:07:35.593 Doorbell Stride: 4 bytes 00:07:35.593 NVM Subsystem Reset: Not Supported 00:07:35.593 Command Sets Supported 00:07:35.593 NVM Command Set: Supported 00:07:35.593 Boot Partition: Not Supported 00:07:35.593 Memory Page Size Minimum: 4096 bytes 00:07:35.593 Memory Page Size Maximum: 65536 bytes 00:07:35.593 Persistent Memory Region: Not Supported 00:07:35.593 Optional Asynchronous Events Supported 00:07:35.593 Namespace Attribute Notices: Supported 00:07:35.593 Firmware Activation Notices: Not Supported 00:07:35.593 ANA Change Notices: Not Supported 00:07:35.593 PLE Aggregate Log Change Notices: Not Supported 00:07:35.593 LBA Status Info Alert Notices: Not Supported 00:07:35.593 EGE Aggregate Log Change Notices: Not Supported 00:07:35.593 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.593 Zone Descriptor Change Notices: Not Supported 00:07:35.593 Discovery Log Change Notices: Not Supported 00:07:35.593 Controller Attributes 00:07:35.593 128-bit Host Identifier: Not Supported 00:07:35.593 Non-Operational Permissive Mode: Not Supported 00:07:35.593 NVM Sets: Not Supported 00:07:35.593 Read Recovery Levels: Not Supported 00:07:35.594 Endurance Groups: Not Supported 00:07:35.594 Predictable Latency Mode: Not Supported 00:07:35.594 Traffic Based Keep ALive: Not Supported 00:07:35.594 Namespace Granularity: Not Supported 00:07:35.594 SQ Associations: Not Supported 00:07:35.594 UUID List: Not Supported 00:07:35.594 Multi-Domain Subsystem: Not Supported 00:07:35.594 Fixed Capacity Management: Not Supported 00:07:35.594 Variable Capacity Management: Not Supported 00:07:35.594 Delete Endurance Group: Not Supported 00:07:35.594 Delete NVM Set: Not Supported 00:07:35.594 Extended LBA Formats Supported: Supported 00:07:35.594 Flexible Data Placement 
Supported: Not Supported 00:07:35.594 00:07:35.594 Controller Memory Buffer Support 00:07:35.594 ================================ 00:07:35.594 Supported: No 00:07:35.594 00:07:35.594 Persistent Memory Region Support 00:07:35.594 ================================ 00:07:35.594 Supported: No 00:07:35.594 00:07:35.594 Admin Command Set Attributes 00:07:35.594 ============================ 00:07:35.594 Security Send/Receive: Not Supported 00:07:35.595 Format NVM: Supported 00:07:35.595 Firmware Activate/Download: Not Supported 00:07:35.595 Namespace Management: Supported 00:07:35.595 Device Self-Test: Not Supported 00:07:35.595 Directives: Supported 00:07:35.595 NVMe-MI: Not Supported 00:07:35.595 Virtualization Management: Not Supported 00:07:35.595 Doorbell Buffer Config: Supported 00:07:35.595 Get LBA Status Capability: Not Supported 00:07:35.595 Command & Feature Lockdown Capability: Not Supported 00:07:35.595 Abort Command Limit: 4 00:07:35.595 Async Event Request Limit: 4 00:07:35.595 Number of Firmware Slots: N/A 00:07:35.595 Firmware Slot 1 Read-Only: N/A 00:07:35.595 Firmware Activation Without Reset: N/A 00:07:35.595 Multiple Update Detection Support: N/A 00:07:35.595 Firmware Update Granularity: No Information Provided 00:07:35.595 Per-Namespace SMART Log: Yes 00:07:35.595 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.595 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:35.595 Command Effects Log Page: Supported 00:07:35.595 Get Log Page Extended Data: Supported 00:07:35.595 Telemetry Log Pages: Not Supported 00:07:35.595 Persistent Event Log Pages: Not Supported 00:07:35.596 Supported Log Pages Log Page: May Support 00:07:35.596 Commands Supported & Effects Log Page: Not Supported 00:07:35.596 Feature Identifiers & Effects Log Page:May Support 00:07:35.596 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.596 Data Area 4 for Telemetry Log: Not Supported 00:07:35.596 Error Log Page Entries Supported: 1 00:07:35.596 Keep Alive: Not Supported 00:07:35.596 00:07:35.596 NVM Command Set Attributes 00:07:35.596 ========================== 00:07:35.596 Submission Queue Entry Size 00:07:35.596 Max: 64 00:07:35.596 Min: 64 00:07:35.596 Completion Queue Entry Size 00:07:35.596 Max: 16 00:07:35.596 Min: 16 00:07:35.596 Number of Namespaces: 256 00:07:35.596 Compare Command: Supported 00:07:35.596 Write Uncorrectable Command: Not Supported 00:07:35.596 Dataset Management Command: Supported 00:07:35.596 Write Zeroes Command: Supported 00:07:35.596 Set Features Save Field: Supported 00:07:35.596 Reservations: Not Supported 00:07:35.596 Timestamp: Supported 00:07:35.596 Copy: Supported 00:07:35.596 Volatile Write Cache: Present 00:07:35.596 Atomic Write Unit (Normal): 1 00:07:35.596 Atomic Write Unit (PFail): 1 00:07:35.596 Atomic Compare & Write Unit: 1 00:07:35.596 Fused Compare & Write: Not Supported 00:07:35.596 Scatter-Gather List 00:07:35.596 SGL Command Set: Supported 00:07:35.596 SGL Keyed: Not Supported 00:07:35.596 SGL Bit Bucket Descriptor: Not Supported 00:07:35.596 SGL Metadata Pointer: Not Supported 00:07:35.596 Oversized SGL: Not Supported 00:07:35.596 SGL Metadata Address: Not Supported 00:07:35.596 SGL Offset: Not Supported 00:07:35.596 Transport SGL Data Block: Not Supported 00:07:35.596 Replay Protected Memory Block: Not Supported 00:07:35.596 00:07:35.596 Firmware Slot Information 00:07:35.596 ========================= 00:07:35.596 Active slot: 1 00:07:35.597 Slot 1 Firmware Revision: 1.0 00:07:35.597 00:07:35.597 00:07:35.597 Commands Supported and Effects 
00:07:35.597 ============================== 00:07:35.597 Admin Commands 00:07:35.597 -------------- 00:07:35.597 Delete I/O Submission Queue (00h): Supported 00:07:35.597 Create I/O Submission Queue (01h): Supported 00:07:35.597 Get Log Page (02h): Supported 00:07:35.597 Delete I/O Completion Queue (04h): Supported 00:07:35.597 Create I/O Completion Queue (05h): Supported 00:07:35.597 Identify (06h): Supported 00:07:35.597 Abort (08h): Supported 00:07:35.597 Set Features (09h): Supported 00:07:35.597 Get Features (0Ah): Supported 00:07:35.597 Asynchronous Event Request (0Ch): Supported 00:07:35.597 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.597 Directive Send (19h): Supported 00:07:35.597 Directive Receive (1Ah): Supported 00:07:35.597 Virtualization Management (1Ch): Supported 00:07:35.597 Doorbell Buffer Config (7Ch): Supported 00:07:35.597 Format NVM (80h): Supported LBA-Change 00:07:35.597 I/O Commands 00:07:35.597 ------------ 00:07:35.597 Flush (00h): Supported LBA-Change 00:07:35.597 Write (01h): Supported LBA-Change 00:07:35.597 Read (02h): Supported 00:07:35.597 Compare (05h): Supported 00:07:35.597 Write Zeroes (08h): Supported LBA-Change 00:07:35.597 Dataset Management (09h): Supported LBA-Change 00:07:35.597 Unknown (0Ch): Supported 00:07:35.597 Unknown (12h): Supported 00:07:35.597 Copy (19h): Supported LBA-Change 00:07:35.597 Unknown (1Dh): Supported LBA-Change 00:07:35.597 00:07:35.598 Error Log 00:07:35.598 ========= 00:07:35.598 00:07:35.598 Arbitration 00:07:35.598 =========== 00:07:35.598 Arbitration Burst: no limit 00:07:35.598 00:07:35.598 Power Management 00:07:35.598 ================ 00:07:35.598 Number of Power States: 1 00:07:35.598 Current Power State: Power State #0 00:07:35.598 Power State #0: 00:07:35.598 Max Power: 25.00 W 00:07:35.598 Non-Operational State: Operational 00:07:35.598 Entry Latency: 16 microseconds 00:07:35.598 Exit Latency: 4 microseconds 00:07:35.598 Relative Read Throughput: 0 00:07:35.598 Relative Read Latency: 0 00:07:35.598 Relative Write Throughput: 0 00:07:35.598 Relative Write Latency: 0 00:07:35.598 Idle Power: Not Reported 00:07:35.598 Active Power: Not Reported 00:07:35.598 Non-Operational Permissive Mode: Not Supported 00:07:35.598 00:07:35.598 Health Information 00:07:35.598 ================== 00:07:35.598 Critical Warnings: 00:07:35.598 Available Spare Space: OK 00:07:35.598 Temperature: OK 00:07:35.598 Device Reliability: OK 00:07:35.598 Read Only: No 00:07:35.598 Volatile Memory Backup: OK 00:07:35.598 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.599 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.599 Available Spare: 0% 00:07:35.599 Available Spare Threshold: 0% 00:07:35.599 Life Percentage Used: 0% 00:07:35.599 Data Units Read: 1022 00:07:35.599 Data Units Written: 882 00:07:35.599 Host Read Commands: 50999 00:07:35.599 Host Write Commands: 49683 00:07:35.599 Controller Busy Time: 0 minutes 00:07:35.599 Power Cycles: 0 00:07:35.599 Power On Hours: 0 hours 00:07:35.599 Unsafe Shutdowns: 0 00:07:35.599 Unrecoverable Media Errors: 0 00:07:35.599 Lifetime Error Log Entries: 0 00:07:35.599 Warning Temperature Time: 0 minutes 00:07:35.599 Critical Temperature Time: 0 minutes 00:07:35.599 00:07:35.599 Number of Queues 00:07:35.599 ================ 00:07:35.599 Number of I/O Submission Queues: 64 00:07:35.599 Number of I/O Completion Queues: 64 00:07:35.599 00:07:35.599 ZNS Specific Controller Data 00:07:35.599 ============================ 00:07:35.599 Zone Append Size Limit: 0 00:07:35.599 
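
Each health section in these dumps reports temperature as Kelvin with Celsius in parentheses (323 Kelvin is 50 Celsius, i.e. the Kelvin value minus 273). A small, hypothetical post-processing step that pulls the current temperature out of the same identify output:

  # Sketch only: extract and convert the current temperature from the dump.
  # Uses the identify binary path shown in this log; awk coerces field 3
  # ("323") to a number for the Kelvin-to-Celsius arithmetic.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 |
    awk '/Current Temperature:/ { printf "%d Kelvin (%d Celsius)\n", $3, $3 - 273 }'
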
00:07:35.599 00:07:35.599 Active Namespaces 00:07:35.599 ================= 00:07:35.599 Namespace ID:1 00:07:35.599 Error Recovery Timeout: Unlimited 00:07:35.599 Command Set Identifier: NVM (00h) 00:07:35.599 Deallocate: Supported 00:07:35.599 Deallocated/Unwritten Error: Supported 00:07:35.599 Deallocated Read Value: All 0x00 00:07:35.599 Deallocate in Write Zeroes: Not Supported 00:07:35.599 Deallocated Guard Field: 0xFFFF 00:07:35.599 Flush: Supported 00:07:35.599 Reservation: Not Supported 00:07:35.599 Namespace Sharing Capabilities: Private 00:07:35.599 Size (in LBAs): 1310720 (5GiB) 00:07:35.600 Capacity (in LBAs): 1310720 (5GiB) 00:07:35.600 Utilization (in LBAs): 1310720 (5GiB) 00:07:35.600 Thin Provisioning: Not Supported 00:07:35.600 Per-NS Atomic Units: No 00:07:35.600 Maximum Single Source Range Length: 128 00:07:35.600 Maximum Copy Length: 128 00:07:35.600 Maximum Source Range Count: 128 00:07:35.600 NGUID/EUI64 Never Reused: No 00:07:35.600 Namespace Write Protected: No 00:07:35.600 Number of LBA Formats: 8 00:07:35.600 Current LBA Format: LBA Format #04 00:07:35.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.600 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.600 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.600 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.600 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.600 LBA Format #05: Data Size: 4096 Metadata Size: 8 [2024-11-19 19:57:09.169647] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62790 terminated unexpected 00:07:35.600 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.600 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.600 00:07:35.600 NVM Specific Namespace Data 00:07:35.600 =========================== 00:07:35.600 Logical Block Storage Tag Mask: 0 00:07:35.600 Protection Information Capabilities: 00:07:35.600 16b Guard Protection Information Storage Tag Support: No 00:07:35.600 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.600 Storage Tag Check Read Support: No 00:07:35.600 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.600 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.601 ===================================================== 00:07:35.601 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:35.601 ===================================================== 00:07:35.601 Controller Capabilities/Features 00:07:35.601 ================================ 00:07:35.601 Vendor ID: 1b36 00:07:35.601 Subsystem Vendor ID: 1af4 00:07:35.601 Serial Number: 12342 00:07:35.601 Model Number: QEMU NVMe Ctrl 00:07:35.601 Firmware Version: 8.0.0 00:07:35.601 Recommended Arb Burst: 6 00:07:35.601 IEEE OUI Identifier: 00 54 52 00:07:35.601 Multi-path I/O
00:07:35.601 May have multiple subsystem ports: No 00:07:35.601 May have multiple controllers: No 00:07:35.601 Associated with SR-IOV VF: No 00:07:35.601 Max Data Transfer Size: 524288 00:07:35.601 Max Number of Namespaces: 256 00:07:35.601 Max Number of I/O Queues: 64 00:07:35.601 NVMe Specification Version (VS): 1.4 00:07:35.601 NVMe Specification Version (Identify): 1.4 00:07:35.601 Maximum Queue Entries: 2048 00:07:35.601 Contiguous Queues Required: Yes 00:07:35.602 Arbitration Mechanisms Supported 00:07:35.602 Weighted Round Robin: Not Supported 00:07:35.602 Vendor Specific: Not Supported 00:07:35.602 Reset Timeout: 7500 ms 00:07:35.602 Doorbell Stride: 4 bytes 00:07:35.603 NVM Subsystem Reset: Not Supported 00:07:35.603 Command Sets Supported 00:07:35.603 NVM Command Set: Supported 00:07:35.603 Boot Partition: Not Supported 00:07:35.603 Memory Page Size Minimum: 4096 bytes 00:07:35.603 Memory Page Size Maximum: 65536 bytes 00:07:35.603 Persistent Memory Region: Not Supported 00:07:35.603 Optional Asynchronous Events Supported 00:07:35.603 Namespace Attribute Notices: Supported 00:07:35.603 Firmware Activation Notices: Not Supported 00:07:35.603 ANA Change Notices: Not Supported 00:07:35.603 PLE Aggregate Log Change Notices: Not Supported 00:07:35.603 LBA Status Info Alert Notices: Not Supported 00:07:35.603 EGE Aggregate Log Change Notices: Not Supported 00:07:35.603 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.603 Zone Descriptor Change Notices: Not Supported 00:07:35.603 Discovery Log Change Notices: Not Supported 00:07:35.603 Controller Attributes 00:07:35.603 128-bit Host Identifier: Not Supported 00:07:35.603 Non-Operational Permissive Mode: Not Supported 00:07:35.603 NVM Sets: Not Supported 00:07:35.603 Read Recovery Levels: Not Supported 00:07:35.603 Endurance Groups: Not Supported 00:07:35.603 Predictable Latency Mode: Not Supported 00:07:35.603 Traffic Based Keep ALive: Not Supported 00:07:35.603 Namespace Granularity: Not Supported 00:07:35.603 SQ Associations: Not Supported 00:07:35.603 UUID List: Not Supported 00:07:35.604 Multi-Domain Subsystem: Not Supported 00:07:35.604 Fixed Capacity Management: Not Supported 00:07:35.604 Variable Capacity Management: Not Supported 00:07:35.604 Delete Endurance Group: Not Supported 00:07:35.604 Delete NVM Set: Not Supported 00:07:35.604 Extended LBA Formats Supported: Supported 00:07:35.604 Flexible Data Placement Supported: Not Supported 00:07:35.604 00:07:35.604 Controller Memory Buffer Support 00:07:35.604 ================================ 00:07:35.604 Supported: No 00:07:35.604 00:07:35.604 Persistent Memory Region Support 00:07:35.604 ================================ 00:07:35.604 Supported: No 00:07:35.604 00:07:35.604 Admin Command Set Attributes 00:07:35.604 ============================ 00:07:35.604 Security Send/Receive: Not Supported 00:07:35.604 Format NVM: Supported 00:07:35.604 Firmware Activate/Download: Not Supported 00:07:35.604 Namespace Management: Supported 00:07:35.604 Device Self-Test: Not Supported 00:07:35.604 Directives: Supported 00:07:35.604 NVMe-MI: Not Supported 00:07:35.604 Virtualization Management: Not Supported 00:07:35.604 Doorbell Buffer Config: Supported 00:07:35.604 Get LBA Status Capability: Not Supported 00:07:35.604 Command & Feature Lockdown Capability: Not Supported 00:07:35.604 Abort Command Limit: 4 00:07:35.604 Async Event Request Limit: 4 00:07:35.604 Number of Firmware Slots: N/A 00:07:35.604 Firmware Slot 1 Read-Only: N/A 00:07:35.604 Firmware Activation Without Reset: N/A 
00:07:35.604 Multiple Update Detection Support: N/A 00:07:35.604 Firmware Update Granularity: No Information Provided 00:07:35.604 Per-Namespace SMART Log: Yes 00:07:35.604 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.604 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:35.604 Command Effects Log Page: Supported 00:07:35.604 Get Log Page Extended Data: Supported 00:07:35.605 Telemetry Log Pages: Not Supported 00:07:35.605 Persistent Event Log Pages: Not Supported 00:07:35.605 Supported Log Pages Log Page: May Support 00:07:35.605 Commands Supported & Effects Log Page: Not Supported 00:07:35.605 Feature Identifiers & Effects Log Page:May Support 00:07:35.605 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.605 Data Area 4 for Telemetry Log: Not Supported 00:07:35.605 Error Log Page Entries Supported: 1 00:07:35.605 Keep Alive: Not Supported 00:07:35.605 00:07:35.605 NVM Command Set Attributes 00:07:35.605 ========================== 00:07:35.605 Submission Queue Entry Size 00:07:35.605 Max: 64 00:07:35.605 Min: 64 00:07:35.605 Completion Queue Entry Size 00:07:35.605 Max: 16 00:07:35.605 Min: 16 00:07:35.605 Number of Namespaces: 256 00:07:35.605 Compare Command: Supported 00:07:35.605 Write Uncorrectable Command: Not Supported 00:07:35.605 Dataset Management Command: Supported 00:07:35.605 Write Zeroes Command: Supported 00:07:35.605 Set Features Save Field: Supported 00:07:35.605 Reservations: Not Supported 00:07:35.605 Timestamp: Supported 00:07:35.605 Copy: Supported 00:07:35.605 Volatile Write Cache: Present 00:07:35.605 Atomic Write Unit (Normal): 1 00:07:35.605 Atomic Write Unit (PFail): 1 00:07:35.605 Atomic Compare & Write Unit: 1 00:07:35.605 Fused Compare & Write: Not Supported 00:07:35.605 Scatter-Gather List 00:07:35.605 SGL Command Set: Supported 00:07:35.605 SGL Keyed: Not Supported 00:07:35.605 SGL Bit Bucket Descriptor: Not Supported 00:07:35.605 SGL Metadata Pointer: Not Supported 00:07:35.605 Oversized SGL: Not Supported 00:07:35.605 SGL Metadata Address: Not Supported 00:07:35.605 SGL Offset: Not Supported 00:07:35.605 Transport SGL Data Block: Not Supported 00:07:35.605 Replay Protected Memory Block: Not Supported 00:07:35.605 00:07:35.605 Firmware Slot Information 00:07:35.605 ========================= 00:07:35.605 Active slot: 1 00:07:35.605 Slot 1 Firmware Revision: 1.0 00:07:35.605 00:07:35.605 00:07:35.605 Commands Supported and Effects 00:07:35.605 ============================== 00:07:35.605 Admin Commands 00:07:35.605 -------------- 00:07:35.605 Delete I/O Submission Queue (00h): Supported 00:07:35.605 Create I/O Submission Queue (01h): Supported 00:07:35.605 Get Log Page (02h): Supported 00:07:35.605 Delete I/O Completion Queue (04h): Supported 00:07:35.605 Create I/O Completion Queue (05h): Supported 00:07:35.605 Identify (06h): Supported 00:07:35.605 Abort (08h): Supported 00:07:35.605 Set Features (09h): Supported 00:07:35.605 Get Features (0Ah): Supported 00:07:35.605 Asynchronous Event Request (0Ch): Supported 00:07:35.605 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.605 Directive Send (19h): Supported 00:07:35.605 Directive Receive (1Ah): Supported 00:07:35.605 Virtualization Management (1Ch): Supported 00:07:35.605 Doorbell Buffer Config (7Ch): Supported 00:07:35.605 Format NVM (80h): Supported LBA-Change 00:07:35.605 I/O Commands 00:07:35.605 ------------ 00:07:35.605 Flush (00h): Supported LBA-Change 00:07:35.605 Write (01h): Supported LBA-Change 00:07:35.605 Read (02h): Supported 00:07:35.605 Compare (05h): 
Supported 00:07:35.605 Write Zeroes (08h): Supported LBA-Change 00:07:35.605 Dataset Management (09h): Supported LBA-Change 00:07:35.605 Unknown (0Ch): Supported 00:07:35.605 Unknown (12h): Supported 00:07:35.605 Copy (19h): Supported LBA-Change 00:07:35.605 Unknown (1Dh): Supported LBA-Change 00:07:35.605 00:07:35.605 Error Log 00:07:35.605 ========= 00:07:35.605 00:07:35.605 Arbitration 00:07:35.605 =========== 00:07:35.605 Arbitration Burst: no limit 00:07:35.605 00:07:35.605 Power Management 00:07:35.605 ================ 00:07:35.605 Number of Power States: 1 00:07:35.605 Current Power State: Power State #0 00:07:35.605 Power State #0: 00:07:35.605 Max Power: 25.00 W 00:07:35.605 Non-Operational State: Operational 00:07:35.605 Entry Latency: 16 microseconds 00:07:35.605 Exit Latency: 4 microseconds 00:07:35.605 Relative Read Throughput: 0 00:07:35.605 Relative Read Latency: 0 00:07:35.605 Relative Write Throughput: 0 00:07:35.605 Relative Write Latency: 0 00:07:35.605 Idle Power: Not Reported 00:07:35.605 Active Power: Not Reported 00:07:35.605 Non-Operational Permissive Mode: Not Supported 00:07:35.605 00:07:35.605 Health Information 00:07:35.605 ================== 00:07:35.605 Critical Warnings: 00:07:35.605 Available Spare Space: OK 00:07:35.605 Temperature: OK 00:07:35.605 Device Reliability: OK 00:07:35.605 Read Only: No 00:07:35.605 Volatile Memory Backup: OK 00:07:35.605 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.605 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.605 Available Spare: 0% 00:07:35.605 Available Spare Threshold: 0% 00:07:35.605 Life Percentage Used: 0% 00:07:35.605 Data Units Read: 2124 00:07:35.605 Data Units Written: 1911 00:07:35.606 Host Read Commands: 103774 00:07:35.606 Host Write Commands: 102044 00:07:35.606 Controller Busy Time: 0 minutes 00:07:35.606 Power Cycles: 0 00:07:35.606 Power On Hours: 0 hours 00:07:35.606 Unsafe Shutdowns: 0 00:07:35.606 Unrecoverable Media Errors: 0 00:07:35.606 Lifetime Error Log Entries: 0 00:07:35.606 Warning Temperature Time: 0 minutes 00:07:35.606 Critical Temperature Time: 0 minutes 00:07:35.606 00:07:35.606 Number of Queues 00:07:35.606 ================ 00:07:35.606 Number of I/O Submission Queues: 64 00:07:35.606 Number of I/O Completion Queues: 64 00:07:35.606 00:07:35.606 ZNS Specific Controller Data 00:07:35.606 ============================ 00:07:35.606 Zone Append Size Limit: 0 00:07:35.606 00:07:35.606 00:07:35.606 Active Namespaces 00:07:35.606 ================= 00:07:35.606 Namespace ID:1 00:07:35.606 Error Recovery Timeout: Unlimited 00:07:35.606 Command Set Identifier: NVM (00h) 00:07:35.606 Deallocate: Supported 00:07:35.606 Deallocated/Unwritten Error: Supported 00:07:35.606 Deallocated Read Value: All 0x00 00:07:35.606 Deallocate in Write Zeroes: Not Supported 00:07:35.606 Deallocated Guard Field: 0xFFFF 00:07:35.606 Flush: Supported 00:07:35.606 Reservation: Not Supported 00:07:35.606 Namespace Sharing Capabilities: Private 00:07:35.606 Size (in LBAs): 1048576 (4GiB) 00:07:35.606 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.606 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.606 Thin Provisioning: Not Supported 00:07:35.606 Per-NS Atomic Units: No 00:07:35.606 Maximum Single Source Range Length: 128 00:07:35.606 Maximum Copy Length: 128 00:07:35.606 Maximum Source Range Count: 128 00:07:35.606 NGUID/EUI64 Never Reused: No 00:07:35.606 Namespace Write Protected: No 00:07:35.606 Number of LBA Formats: 8 00:07:35.606 Current LBA Format: LBA Format #04 00:07:35.606 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:35.606 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.606 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.606 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.606 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.606 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.606 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.606 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.606 00:07:35.606 NVM Specific Namespace Data 00:07:35.606 =========================== 00:07:35.606 Logical Block Storage Tag Mask: 0 00:07:35.606 Protection Information Capabilities: 00:07:35.606 16b Guard Protection Information Storage Tag Support: No 00:07:35.606 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.606 Storage Tag Check Read Support: No 00:07:35.606 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Namespace ID:2 00:07:35.606 Error Recovery Timeout: Unlimited 00:07:35.606 Command Set Identifier: NVM (00h) 00:07:35.606 Deallocate: Supported 00:07:35.606 Deallocated/Unwritten Error: Supported 00:07:35.606 Deallocated Read Value: All 0x00 00:07:35.606 Deallocate in Write Zeroes: Not Supported 00:07:35.606 Deallocated Guard Field: 0xFFFF 00:07:35.606 Flush: Supported 00:07:35.606 Reservation: Not Supported 00:07:35.606 Namespace Sharing Capabilities: Private 00:07:35.606 Size (in LBAs): 1048576 (4GiB) 00:07:35.606 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.606 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.606 Thin Provisioning: Not Supported 00:07:35.606 Per-NS Atomic Units: No 00:07:35.606 Maximum Single Source Range Length: 128 00:07:35.606 Maximum Copy Length: 128 00:07:35.606 Maximum Source Range Count: 128 00:07:35.606 NGUID/EUI64 Never Reused: No 00:07:35.606 Namespace Write Protected: No 00:07:35.606 Number of LBA Formats: 8 00:07:35.606 Current LBA Format: LBA Format #04 00:07:35.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.606 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.606 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.606 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.606 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.606 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.606 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.606 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.606 00:07:35.606 NVM Specific Namespace Data 00:07:35.606 =========================== 00:07:35.606 Logical Block Storage Tag Mask: 0 00:07:35.606 Protection Information Capabilities: 00:07:35.606 16b Guard Protection Information Storage Tag Support: No 00:07:35.606 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:35.606 Storage Tag Check Read Support: No 00:07:35.606 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Namespace ID:3 00:07:35.606 Error Recovery Timeout: Unlimited 00:07:35.606 Command Set Identifier: NVM (00h) 00:07:35.606 Deallocate: Supported 00:07:35.606 Deallocated/Unwritten Error: Supported 00:07:35.606 Deallocated Read Value: All 0x00 00:07:35.606 Deallocate in Write Zeroes: Not Supported 00:07:35.606 Deallocated Guard Field: 0xFFFF 00:07:35.606 Flush: Supported 00:07:35.606 Reservation: Not Supported 00:07:35.606 Namespace Sharing Capabilities: Private 00:07:35.606 Size (in LBAs): 1048576 (4GiB) 00:07:35.606 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.606 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.606 Thin Provisioning: Not Supported 00:07:35.606 Per-NS Atomic Units: No 00:07:35.606 Maximum Single Source Range Length: 128 00:07:35.606 Maximum Copy Length: 128 00:07:35.606 Maximum Source Range Count: 128 00:07:35.606 NGUID/EUI64 Never Reused: No 00:07:35.606 Namespace Write Protected: No 00:07:35.606 Number of LBA Formats: 8 00:07:35.606 Current LBA Format: LBA Format #04 00:07:35.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.606 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.606 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.606 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.606 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.606 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.606 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.606 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.606 00:07:35.606 NVM Specific Namespace Data 00:07:35.606 =========================== 00:07:35.606 Logical Block Storage Tag Mask: 0 00:07:35.606 Protection Information Capabilities: 00:07:35.606 16b Guard Protection Information Storage Tag Support: No 00:07:35.606 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.606 Storage Tag Check Read Support: No 00:07:35.606 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.606 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:35.606 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:35.868 ===================================================== 00:07:35.868 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:35.868 ===================================================== 00:07:35.868 Controller Capabilities/Features 00:07:35.868 ================================ 00:07:35.868 Vendor ID: 1b36 00:07:35.868 Subsystem Vendor ID: 1af4 00:07:35.868 Serial Number: 12340 00:07:35.868 Model Number: QEMU NVMe Ctrl 00:07:35.868 Firmware Version: 8.0.0 00:07:35.868 Recommended Arb Burst: 6 00:07:35.868 IEEE OUI Identifier: 00 54 52 00:07:35.868 Multi-path I/O 00:07:35.868 May have multiple subsystem ports: No 00:07:35.868 May have multiple controllers: No 00:07:35.868 Associated with SR-IOV VF: No 00:07:35.868 Max Data Transfer Size: 524288 00:07:35.868 Max Number of Namespaces: 256 00:07:35.868 Max Number of I/O Queues: 64 00:07:35.868 NVMe Specification Version (VS): 1.4 00:07:35.868 NVMe Specification Version (Identify): 1.4 00:07:35.868 Maximum Queue Entries: 2048 00:07:35.868 Contiguous Queues Required: Yes 00:07:35.868 Arbitration Mechanisms Supported 00:07:35.868 Weighted Round Robin: Not Supported 00:07:35.868 Vendor Specific: Not Supported 00:07:35.868 Reset Timeout: 7500 ms 00:07:35.868 Doorbell Stride: 4 bytes 00:07:35.868 NVM Subsystem Reset: Not Supported 00:07:35.868 Command Sets Supported 00:07:35.868 NVM Command Set: Supported 00:07:35.868 Boot Partition: Not Supported 00:07:35.868 Memory Page Size Minimum: 4096 bytes 00:07:35.868 Memory Page Size Maximum: 65536 bytes 00:07:35.868 Persistent Memory Region: Not Supported 00:07:35.868 Optional Asynchronous Events Supported 00:07:35.868 Namespace Attribute Notices: Supported 00:07:35.868 Firmware Activation Notices: Not Supported 00:07:35.868 ANA Change Notices: Not Supported 00:07:35.868 PLE Aggregate Log Change Notices: Not Supported 00:07:35.868 LBA Status Info Alert Notices: Not Supported 00:07:35.868 EGE Aggregate Log Change Notices: Not Supported 00:07:35.868 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.868 Zone Descriptor Change Notices: Not Supported 00:07:35.868 Discovery Log Change Notices: Not Supported 00:07:35.868 Controller Attributes 00:07:35.868 128-bit Host Identifier: Not Supported 00:07:35.868 Non-Operational Permissive Mode: Not Supported 00:07:35.868 NVM Sets: Not Supported 00:07:35.868 Read Recovery Levels: Not Supported 00:07:35.868 Endurance Groups: Not Supported 00:07:35.868 Predictable Latency Mode: Not Supported 00:07:35.868 Traffic Based Keep ALive: Not Supported 00:07:35.868 Namespace Granularity: Not Supported 00:07:35.868 SQ Associations: Not Supported 00:07:35.868 UUID List: Not Supported 00:07:35.868 Multi-Domain Subsystem: Not Supported 00:07:35.868 Fixed Capacity Management: Not Supported 00:07:35.868 Variable Capacity Management: Not Supported 00:07:35.868 Delete Endurance Group: Not Supported 00:07:35.868 Delete NVM Set: Not Supported 00:07:35.868 Extended LBA Formats Supported: Supported 00:07:35.868 Flexible Data Placement Supported: Not Supported 00:07:35.868 00:07:35.868 Controller Memory Buffer Support 00:07:35.868 ================================ 00:07:35.868 Supported: No 00:07:35.868 00:07:35.868 Persistent Memory Region Support 00:07:35.868 
================================ 00:07:35.868 Supported: No 00:07:35.868 00:07:35.868 Admin Command Set Attributes 00:07:35.868 ============================ 00:07:35.868 Security Send/Receive: Not Supported 00:07:35.868 Format NVM: Supported 00:07:35.868 Firmware Activate/Download: Not Supported 00:07:35.868 Namespace Management: Supported 00:07:35.868 Device Self-Test: Not Supported 00:07:35.868 Directives: Supported 00:07:35.868 NVMe-MI: Not Supported 00:07:35.868 Virtualization Management: Not Supported 00:07:35.868 Doorbell Buffer Config: Supported 00:07:35.868 Get LBA Status Capability: Not Supported 00:07:35.868 Command & Feature Lockdown Capability: Not Supported 00:07:35.868 Abort Command Limit: 4 00:07:35.868 Async Event Request Limit: 4 00:07:35.868 Number of Firmware Slots: N/A 00:07:35.868 Firmware Slot 1 Read-Only: N/A 00:07:35.868 Firmware Activation Without Reset: N/A 00:07:35.868 Multiple Update Detection Support: N/A 00:07:35.868 Firmware Update Granularity: No Information Provided 00:07:35.868 Per-Namespace SMART Log: Yes 00:07:35.868 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.868 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:35.868 Command Effects Log Page: Supported 00:07:35.868 Get Log Page Extended Data: Supported 00:07:35.868 Telemetry Log Pages: Not Supported 00:07:35.868 Persistent Event Log Pages: Not Supported 00:07:35.868 Supported Log Pages Log Page: May Support 00:07:35.868 Commands Supported & Effects Log Page: Not Supported 00:07:35.868 Feature Identifiers & Effects Log Page:May Support 00:07:35.868 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.868 Data Area 4 for Telemetry Log: Not Supported 00:07:35.868 Error Log Page Entries Supported: 1 00:07:35.868 Keep Alive: Not Supported 00:07:35.868 00:07:35.868 NVM Command Set Attributes 00:07:35.868 ========================== 00:07:35.868 Submission Queue Entry Size 00:07:35.868 Max: 64 00:07:35.868 Min: 64 00:07:35.868 Completion Queue Entry Size 00:07:35.868 Max: 16 00:07:35.868 Min: 16 00:07:35.868 Number of Namespaces: 256 00:07:35.868 Compare Command: Supported 00:07:35.868 Write Uncorrectable Command: Not Supported 00:07:35.868 Dataset Management Command: Supported 00:07:35.868 Write Zeroes Command: Supported 00:07:35.868 Set Features Save Field: Supported 00:07:35.868 Reservations: Not Supported 00:07:35.868 Timestamp: Supported 00:07:35.868 Copy: Supported 00:07:35.868 Volatile Write Cache: Present 00:07:35.868 Atomic Write Unit (Normal): 1 00:07:35.868 Atomic Write Unit (PFail): 1 00:07:35.868 Atomic Compare & Write Unit: 1 00:07:35.868 Fused Compare & Write: Not Supported 00:07:35.868 Scatter-Gather List 00:07:35.868 SGL Command Set: Supported 00:07:35.868 SGL Keyed: Not Supported 00:07:35.868 SGL Bit Bucket Descriptor: Not Supported 00:07:35.868 SGL Metadata Pointer: Not Supported 00:07:35.868 Oversized SGL: Not Supported 00:07:35.868 SGL Metadata Address: Not Supported 00:07:35.868 SGL Offset: Not Supported 00:07:35.868 Transport SGL Data Block: Not Supported 00:07:35.868 Replay Protected Memory Block: Not Supported 00:07:35.868 00:07:35.868 Firmware Slot Information 00:07:35.868 ========================= 00:07:35.868 Active slot: 1 00:07:35.868 Slot 1 Firmware Revision: 1.0 00:07:35.868 00:07:35.868 00:07:35.868 Commands Supported and Effects 00:07:35.868 ============================== 00:07:35.868 Admin Commands 00:07:35.868 -------------- 00:07:35.868 Delete I/O Submission Queue (00h): Supported 00:07:35.868 Create I/O Submission Queue (01h): Supported 00:07:35.868 
Get Log Page (02h): Supported 00:07:35.868 Delete I/O Completion Queue (04h): Supported 00:07:35.868 Create I/O Completion Queue (05h): Supported 00:07:35.868 Identify (06h): Supported 00:07:35.868 Abort (08h): Supported 00:07:35.868 Set Features (09h): Supported 00:07:35.868 Get Features (0Ah): Supported 00:07:35.868 Asynchronous Event Request (0Ch): Supported 00:07:35.868 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.868 Directive Send (19h): Supported 00:07:35.868 Directive Receive (1Ah): Supported 00:07:35.868 Virtualization Management (1Ch): Supported 00:07:35.868 Doorbell Buffer Config (7Ch): Supported 00:07:35.868 Format NVM (80h): Supported LBA-Change 00:07:35.868 I/O Commands 00:07:35.868 ------------ 00:07:35.868 Flush (00h): Supported LBA-Change 00:07:35.868 Write (01h): Supported LBA-Change 00:07:35.868 Read (02h): Supported 00:07:35.868 Compare (05h): Supported 00:07:35.868 Write Zeroes (08h): Supported LBA-Change 00:07:35.868 Dataset Management (09h): Supported LBA-Change 00:07:35.868 Unknown (0Ch): Supported 00:07:35.868 Unknown (12h): Supported 00:07:35.868 Copy (19h): Supported LBA-Change 00:07:35.868 Unknown (1Dh): Supported LBA-Change 00:07:35.868 00:07:35.868 Error Log 00:07:35.868 ========= 00:07:35.868 00:07:35.868 Arbitration 00:07:35.868 =========== 00:07:35.868 Arbitration Burst: no limit 00:07:35.868 00:07:35.868 Power Management 00:07:35.868 ================ 00:07:35.868 Number of Power States: 1 00:07:35.868 Current Power State: Power State #0 00:07:35.868 Power State #0: 00:07:35.868 Max Power: 25.00 W 00:07:35.869 Non-Operational State: Operational 00:07:35.869 Entry Latency: 16 microseconds 00:07:35.869 Exit Latency: 4 microseconds 00:07:35.869 Relative Read Throughput: 0 00:07:35.869 Relative Read Latency: 0 00:07:35.869 Relative Write Throughput: 0 00:07:35.869 Relative Write Latency: 0 00:07:35.869 Idle Power: Not Reported 00:07:35.869 Active Power: Not Reported 00:07:35.869 Non-Operational Permissive Mode: Not Supported 00:07:35.869 00:07:35.869 Health Information 00:07:35.869 ================== 00:07:35.869 Critical Warnings: 00:07:35.869 Available Spare Space: OK 00:07:35.869 Temperature: OK 00:07:35.869 Device Reliability: OK 00:07:35.869 Read Only: No 00:07:35.869 Volatile Memory Backup: OK 00:07:35.869 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.869 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.869 Available Spare: 0% 00:07:35.869 Available Spare Threshold: 0% 00:07:35.869 Life Percentage Used: 0% 00:07:35.869 Data Units Read: 656 00:07:35.869 Data Units Written: 584 00:07:35.869 Host Read Commands: 33946 00:07:35.869 Host Write Commands: 33732 00:07:35.869 Controller Busy Time: 0 minutes 00:07:35.869 Power Cycles: 0 00:07:35.869 Power On Hours: 0 hours 00:07:35.869 Unsafe Shutdowns: 0 00:07:35.869 Unrecoverable Media Errors: 0 00:07:35.869 Lifetime Error Log Entries: 0 00:07:35.869 Warning Temperature Time: 0 minutes 00:07:35.869 Critical Temperature Time: 0 minutes 00:07:35.869 00:07:35.869 Number of Queues 00:07:35.869 ================ 00:07:35.869 Number of I/O Submission Queues: 64 00:07:35.869 Number of I/O Completion Queues: 64 00:07:35.869 00:07:35.869 ZNS Specific Controller Data 00:07:35.869 ============================ 00:07:35.869 Zone Append Size Limit: 0 00:07:35.869 00:07:35.869 00:07:35.869 Active Namespaces 00:07:35.869 ================= 00:07:35.869 Namespace ID:1 00:07:35.869 Error Recovery Timeout: Unlimited 00:07:35.869 Command Set Identifier: NVM (00h) 00:07:35.869 Deallocate: Supported 
00:07:35.869 Deallocated/Unwritten Error: Supported 00:07:35.869 Deallocated Read Value: All 0x00 00:07:35.869 Deallocate in Write Zeroes: Not Supported 00:07:35.869 Deallocated Guard Field: 0xFFFF 00:07:35.869 Flush: Supported 00:07:35.869 Reservation: Not Supported 00:07:35.869 Metadata Transferred as: Separate Metadata Buffer 00:07:35.869 Namespace Sharing Capabilities: Private 00:07:35.869 Size (in LBAs): 1548666 (5GiB) 00:07:35.869 Capacity (in LBAs): 1548666 (5GiB) 00:07:35.869 Utilization (in LBAs): 1548666 (5GiB) 00:07:35.869 Thin Provisioning: Not Supported 00:07:35.869 Per-NS Atomic Units: No 00:07:35.869 Maximum Single Source Range Length: 128 00:07:35.869 Maximum Copy Length: 128 00:07:35.869 Maximum Source Range Count: 128 00:07:35.869 NGUID/EUI64 Never Reused: No 00:07:35.869 Namespace Write Protected: No 00:07:35.869 Number of LBA Formats: 8 00:07:35.869 Current LBA Format: LBA Format #07 00:07:35.869 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.869 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.869 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.869 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.869 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.869 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.869 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.869 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.869 00:07:35.869 NVM Specific Namespace Data 00:07:35.869 =========================== 00:07:35.869 Logical Block Storage Tag Mask: 0 00:07:35.869 Protection Information Capabilities: 00:07:35.869 16b Guard Protection Information Storage Tag Support: No 00:07:35.869 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.869 Storage Tag Check Read Support: No 00:07:35.869 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.869 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:35.869 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:36.131 ===================================================== 00:07:36.131 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:36.131 ===================================================== 00:07:36.131 Controller Capabilities/Features 00:07:36.131 ================================ 00:07:36.131 Vendor ID: 1b36 00:07:36.131 Subsystem Vendor ID: 1af4 00:07:36.131 Serial Number: 12341 00:07:36.131 Model Number: QEMU NVMe Ctrl 00:07:36.131 Firmware Version: 8.0.0 00:07:36.131 Recommended Arb Burst: 6 00:07:36.131 IEEE OUI Identifier: 00 54 52 00:07:36.131 Multi-path I/O 00:07:36.131 May have multiple subsystem ports: No 00:07:36.131 May have multiple 
controllers: No 00:07:36.131 Associated with SR-IOV VF: No 00:07:36.131 Max Data Transfer Size: 524288 00:07:36.131 Max Number of Namespaces: 256 00:07:36.131 Max Number of I/O Queues: 64 00:07:36.131 NVMe Specification Version (VS): 1.4 00:07:36.131 NVMe Specification Version (Identify): 1.4 00:07:36.131 Maximum Queue Entries: 2048 00:07:36.131 Contiguous Queues Required: Yes 00:07:36.131 Arbitration Mechanisms Supported 00:07:36.131 Weighted Round Robin: Not Supported 00:07:36.131 Vendor Specific: Not Supported 00:07:36.131 Reset Timeout: 7500 ms 00:07:36.131 Doorbell Stride: 4 bytes 00:07:36.131 NVM Subsystem Reset: Not Supported 00:07:36.131 Command Sets Supported 00:07:36.131 NVM Command Set: Supported 00:07:36.131 Boot Partition: Not Supported 00:07:36.131 Memory Page Size Minimum: 4096 bytes 00:07:36.131 Memory Page Size Maximum: 65536 bytes 00:07:36.131 Persistent Memory Region: Not Supported 00:07:36.131 Optional Asynchronous Events Supported 00:07:36.131 Namespace Attribute Notices: Supported 00:07:36.131 Firmware Activation Notices: Not Supported 00:07:36.131 ANA Change Notices: Not Supported 00:07:36.131 PLE Aggregate Log Change Notices: Not Supported 00:07:36.131 LBA Status Info Alert Notices: Not Supported 00:07:36.131 EGE Aggregate Log Change Notices: Not Supported 00:07:36.131 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.131 Zone Descriptor Change Notices: Not Supported 00:07:36.131 Discovery Log Change Notices: Not Supported 00:07:36.131 Controller Attributes 00:07:36.131 128-bit Host Identifier: Not Supported 00:07:36.131 Non-Operational Permissive Mode: Not Supported 00:07:36.131 NVM Sets: Not Supported 00:07:36.131 Read Recovery Levels: Not Supported 00:07:36.131 Endurance Groups: Not Supported 00:07:36.131 Predictable Latency Mode: Not Supported 00:07:36.131 Traffic Based Keep ALive: Not Supported 00:07:36.131 Namespace Granularity: Not Supported 00:07:36.132 SQ Associations: Not Supported 00:07:36.132 UUID List: Not Supported 00:07:36.132 Multi-Domain Subsystem: Not Supported 00:07:36.132 Fixed Capacity Management: Not Supported 00:07:36.132 Variable Capacity Management: Not Supported 00:07:36.132 Delete Endurance Group: Not Supported 00:07:36.132 Delete NVM Set: Not Supported 00:07:36.132 Extended LBA Formats Supported: Supported 00:07:36.132 Flexible Data Placement Supported: Not Supported 00:07:36.132 00:07:36.132 Controller Memory Buffer Support 00:07:36.132 ================================ 00:07:36.132 Supported: No 00:07:36.132 00:07:36.132 Persistent Memory Region Support 00:07:36.132 ================================ 00:07:36.132 Supported: No 00:07:36.132 00:07:36.132 Admin Command Set Attributes 00:07:36.132 ============================ 00:07:36.132 Security Send/Receive: Not Supported 00:07:36.132 Format NVM: Supported 00:07:36.132 Firmware Activate/Download: Not Supported 00:07:36.132 Namespace Management: Supported 00:07:36.132 Device Self-Test: Not Supported 00:07:36.132 Directives: Supported 00:07:36.132 NVMe-MI: Not Supported 00:07:36.132 Virtualization Management: Not Supported 00:07:36.132 Doorbell Buffer Config: Supported 00:07:36.132 Get LBA Status Capability: Not Supported 00:07:36.132 Command & Feature Lockdown Capability: Not Supported 00:07:36.132 Abort Command Limit: 4 00:07:36.132 Async Event Request Limit: 4 00:07:36.132 Number of Firmware Slots: N/A 00:07:36.132 Firmware Slot 1 Read-Only: N/A 00:07:36.132 Firmware Activation Without Reset: N/A 00:07:36.132 Multiple Update Detection Support: N/A 00:07:36.132 Firmware Update 
Granularity: No Information Provided 00:07:36.132 Per-Namespace SMART Log: Yes 00:07:36.132 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.132 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:36.132 Command Effects Log Page: Supported 00:07:36.132 Get Log Page Extended Data: Supported 00:07:36.132 Telemetry Log Pages: Not Supported 00:07:36.132 Persistent Event Log Pages: Not Supported 00:07:36.132 Supported Log Pages Log Page: May Support 00:07:36.132 Commands Supported & Effects Log Page: Not Supported 00:07:36.132 Feature Identifiers & Effects Log Page:May Support 00:07:36.132 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.132 Data Area 4 for Telemetry Log: Not Supported 00:07:36.132 Error Log Page Entries Supported: 1 00:07:36.132 Keep Alive: Not Supported 00:07:36.132 00:07:36.132 NVM Command Set Attributes 00:07:36.132 ========================== 00:07:36.132 Submission Queue Entry Size 00:07:36.132 Max: 64 00:07:36.132 Min: 64 00:07:36.132 Completion Queue Entry Size 00:07:36.132 Max: 16 00:07:36.132 Min: 16 00:07:36.132 Number of Namespaces: 256 00:07:36.132 Compare Command: Supported 00:07:36.132 Write Uncorrectable Command: Not Supported 00:07:36.132 Dataset Management Command: Supported 00:07:36.132 Write Zeroes Command: Supported 00:07:36.132 Set Features Save Field: Supported 00:07:36.132 Reservations: Not Supported 00:07:36.132 Timestamp: Supported 00:07:36.132 Copy: Supported 00:07:36.132 Volatile Write Cache: Present 00:07:36.132 Atomic Write Unit (Normal): 1 00:07:36.132 Atomic Write Unit (PFail): 1 00:07:36.132 Atomic Compare & Write Unit: 1 00:07:36.132 Fused Compare & Write: Not Supported 00:07:36.132 Scatter-Gather List 00:07:36.132 SGL Command Set: Supported 00:07:36.132 SGL Keyed: Not Supported 00:07:36.132 SGL Bit Bucket Descriptor: Not Supported 00:07:36.132 SGL Metadata Pointer: Not Supported 00:07:36.132 Oversized SGL: Not Supported 00:07:36.132 SGL Metadata Address: Not Supported 00:07:36.132 SGL Offset: Not Supported 00:07:36.132 Transport SGL Data Block: Not Supported 00:07:36.132 Replay Protected Memory Block: Not Supported 00:07:36.132 00:07:36.132 Firmware Slot Information 00:07:36.132 ========================= 00:07:36.132 Active slot: 1 00:07:36.132 Slot 1 Firmware Revision: 1.0 00:07:36.132 00:07:36.132 00:07:36.132 Commands Supported and Effects 00:07:36.132 ============================== 00:07:36.132 Admin Commands 00:07:36.132 -------------- 00:07:36.132 Delete I/O Submission Queue (00h): Supported 00:07:36.132 Create I/O Submission Queue (01h): Supported 00:07:36.132 Get Log Page (02h): Supported 00:07:36.132 Delete I/O Completion Queue (04h): Supported 00:07:36.132 Create I/O Completion Queue (05h): Supported 00:07:36.132 Identify (06h): Supported 00:07:36.132 Abort (08h): Supported 00:07:36.132 Set Features (09h): Supported 00:07:36.132 Get Features (0Ah): Supported 00:07:36.132 Asynchronous Event Request (0Ch): Supported 00:07:36.132 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.132 Directive Send (19h): Supported 00:07:36.132 Directive Receive (1Ah): Supported 00:07:36.132 Virtualization Management (1Ch): Supported 00:07:36.132 Doorbell Buffer Config (7Ch): Supported 00:07:36.132 Format NVM (80h): Supported LBA-Change 00:07:36.132 I/O Commands 00:07:36.132 ------------ 00:07:36.132 Flush (00h): Supported LBA-Change 00:07:36.132 Write (01h): Supported LBA-Change 00:07:36.132 Read (02h): Supported 00:07:36.132 Compare (05h): Supported 00:07:36.132 Write Zeroes (08h): Supported LBA-Change 00:07:36.132 
Dataset Management (09h): Supported LBA-Change 00:07:36.132 Unknown (0Ch): Supported 00:07:36.132 Unknown (12h): Supported 00:07:36.132 Copy (19h): Supported LBA-Change 00:07:36.132 Unknown (1Dh): Supported LBA-Change 00:07:36.132 00:07:36.132 Error Log 00:07:36.132 ========= 00:07:36.132 00:07:36.132 Arbitration 00:07:36.132 =========== 00:07:36.132 Arbitration Burst: no limit 00:07:36.132 00:07:36.132 Power Management 00:07:36.132 ================ 00:07:36.132 Number of Power States: 1 00:07:36.132 Current Power State: Power State #0 00:07:36.132 Power State #0: 00:07:36.132 Max Power: 25.00 W 00:07:36.132 Non-Operational State: Operational 00:07:36.132 Entry Latency: 16 microseconds 00:07:36.132 Exit Latency: 4 microseconds 00:07:36.132 Relative Read Throughput: 0 00:07:36.132 Relative Read Latency: 0 00:07:36.132 Relative Write Throughput: 0 00:07:36.132 Relative Write Latency: 0 00:07:36.132 Idle Power: Not Reported 00:07:36.132 Active Power: Not Reported 00:07:36.132 Non-Operational Permissive Mode: Not Supported 00:07:36.132 00:07:36.132 Health Information 00:07:36.132 ================== 00:07:36.132 Critical Warnings: 00:07:36.132 Available Spare Space: OK 00:07:36.132 Temperature: OK 00:07:36.132 Device Reliability: OK 00:07:36.132 Read Only: No 00:07:36.132 Volatile Memory Backup: OK 00:07:36.132 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.132 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.132 Available Spare: 0% 00:07:36.132 Available Spare Threshold: 0% 00:07:36.132 Life Percentage Used: 0% 00:07:36.132 Data Units Read: 1022 00:07:36.132 Data Units Written: 882 00:07:36.132 Host Read Commands: 50999 00:07:36.132 Host Write Commands: 49683 00:07:36.132 Controller Busy Time: 0 minutes 00:07:36.132 Power Cycles: 0 00:07:36.132 Power On Hours: 0 hours 00:07:36.132 Unsafe Shutdowns: 0 00:07:36.132 Unrecoverable Media Errors: 0 00:07:36.132 Lifetime Error Log Entries: 0 00:07:36.132 Warning Temperature Time: 0 minutes 00:07:36.132 Critical Temperature Time: 0 minutes 00:07:36.132 00:07:36.132 Number of Queues 00:07:36.132 ================ 00:07:36.132 Number of I/O Submission Queues: 64 00:07:36.132 Number of I/O Completion Queues: 64 00:07:36.132 00:07:36.132 ZNS Specific Controller Data 00:07:36.132 ============================ 00:07:36.132 Zone Append Size Limit: 0 00:07:36.132 00:07:36.132 00:07:36.132 Active Namespaces 00:07:36.132 ================= 00:07:36.132 Namespace ID:1 00:07:36.132 Error Recovery Timeout: Unlimited 00:07:36.132 Command Set Identifier: NVM (00h) 00:07:36.132 Deallocate: Supported 00:07:36.132 Deallocated/Unwritten Error: Supported 00:07:36.132 Deallocated Read Value: All 0x00 00:07:36.132 Deallocate in Write Zeroes: Not Supported 00:07:36.132 Deallocated Guard Field: 0xFFFF 00:07:36.132 Flush: Supported 00:07:36.132 Reservation: Not Supported 00:07:36.132 Namespace Sharing Capabilities: Private 00:07:36.132 Size (in LBAs): 1310720 (5GiB) 00:07:36.132 Capacity (in LBAs): 1310720 (5GiB) 00:07:36.132 Utilization (in LBAs): 1310720 (5GiB) 00:07:36.132 Thin Provisioning: Not Supported 00:07:36.132 Per-NS Atomic Units: No 00:07:36.132 Maximum Single Source Range Length: 128 00:07:36.132 Maximum Copy Length: 128 00:07:36.132 Maximum Source Range Count: 128 00:07:36.132 NGUID/EUI64 Never Reused: No 00:07:36.133 Namespace Write Protected: No 00:07:36.133 Number of LBA Formats: 8 00:07:36.133 Current LBA Format: LBA Format #04 00:07:36.133 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.133 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:36.133 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.133 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.133 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.133 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.133 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.133 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.133 00:07:36.133 NVM Specific Namespace Data 00:07:36.133 =========================== 00:07:36.133 Logical Block Storage Tag Mask: 0 00:07:36.133 Protection Information Capabilities: 00:07:36.133 16b Guard Protection Information Storage Tag Support: No 00:07:36.133 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.133 Storage Tag Check Read Support: No 00:07:36.133 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.133 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.133 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:36.394 ===================================================== 00:07:36.394 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:36.394 ===================================================== 00:07:36.394 Controller Capabilities/Features 00:07:36.394 ================================ 00:07:36.394 Vendor ID: 1b36 00:07:36.394 Subsystem Vendor ID: 1af4 00:07:36.394 Serial Number: 12342 00:07:36.394 Model Number: QEMU NVMe Ctrl 00:07:36.394 Firmware Version: 8.0.0 00:07:36.394 Recommended Arb Burst: 6 00:07:36.394 IEEE OUI Identifier: 00 54 52 00:07:36.394 Multi-path I/O 00:07:36.394 May have multiple subsystem ports: No 00:07:36.394 May have multiple controllers: No 00:07:36.394 Associated with SR-IOV VF: No 00:07:36.394 Max Data Transfer Size: 524288 00:07:36.394 Max Number of Namespaces: 256 00:07:36.394 Max Number of I/O Queues: 64 00:07:36.394 NVMe Specification Version (VS): 1.4 00:07:36.394 NVMe Specification Version (Identify): 1.4 00:07:36.394 Maximum Queue Entries: 2048 00:07:36.394 Contiguous Queues Required: Yes 00:07:36.394 Arbitration Mechanisms Supported 00:07:36.394 Weighted Round Robin: Not Supported 00:07:36.394 Vendor Specific: Not Supported 00:07:36.394 Reset Timeout: 7500 ms 00:07:36.394 Doorbell Stride: 4 bytes 00:07:36.394 NVM Subsystem Reset: Not Supported 00:07:36.394 Command Sets Supported 00:07:36.394 NVM Command Set: Supported 00:07:36.394 Boot Partition: Not Supported 00:07:36.394 Memory Page Size Minimum: 4096 bytes 00:07:36.394 Memory Page Size Maximum: 65536 bytes 00:07:36.394 Persistent Memory Region: Not Supported 00:07:36.394 Optional Asynchronous Events Supported 00:07:36.394 Namespace Attribute Notices: Supported 00:07:36.394 Firmware 
Activation Notices: Not Supported 00:07:36.394 ANA Change Notices: Not Supported 00:07:36.394 PLE Aggregate Log Change Notices: Not Supported 00:07:36.395 LBA Status Info Alert Notices: Not Supported 00:07:36.395 EGE Aggregate Log Change Notices: Not Supported 00:07:36.395 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.395 Zone Descriptor Change Notices: Not Supported 00:07:36.395 Discovery Log Change Notices: Not Supported 00:07:36.395 Controller Attributes 00:07:36.395 128-bit Host Identifier: Not Supported 00:07:36.395 Non-Operational Permissive Mode: Not Supported 00:07:36.395 NVM Sets: Not Supported 00:07:36.395 Read Recovery Levels: Not Supported 00:07:36.395 Endurance Groups: Not Supported 00:07:36.395 Predictable Latency Mode: Not Supported 00:07:36.395 Traffic Based Keep ALive: Not Supported 00:07:36.395 Namespace Granularity: Not Supported 00:07:36.395 SQ Associations: Not Supported 00:07:36.395 UUID List: Not Supported 00:07:36.395 Multi-Domain Subsystem: Not Supported 00:07:36.395 Fixed Capacity Management: Not Supported 00:07:36.395 Variable Capacity Management: Not Supported 00:07:36.395 Delete Endurance Group: Not Supported 00:07:36.395 Delete NVM Set: Not Supported 00:07:36.395 Extended LBA Formats Supported: Supported 00:07:36.395 Flexible Data Placement Supported: Not Supported 00:07:36.395 00:07:36.395 Controller Memory Buffer Support 00:07:36.395 ================================ 00:07:36.395 Supported: No 00:07:36.395 00:07:36.395 Persistent Memory Region Support 00:07:36.395 ================================ 00:07:36.395 Supported: No 00:07:36.395 00:07:36.395 Admin Command Set Attributes 00:07:36.395 ============================ 00:07:36.395 Security Send/Receive: Not Supported 00:07:36.395 Format NVM: Supported 00:07:36.395 Firmware Activate/Download: Not Supported 00:07:36.395 Namespace Management: Supported 00:07:36.395 Device Self-Test: Not Supported 00:07:36.395 Directives: Supported 00:07:36.395 NVMe-MI: Not Supported 00:07:36.395 Virtualization Management: Not Supported 00:07:36.395 Doorbell Buffer Config: Supported 00:07:36.395 Get LBA Status Capability: Not Supported 00:07:36.395 Command & Feature Lockdown Capability: Not Supported 00:07:36.395 Abort Command Limit: 4 00:07:36.395 Async Event Request Limit: 4 00:07:36.395 Number of Firmware Slots: N/A 00:07:36.395 Firmware Slot 1 Read-Only: N/A 00:07:36.395 Firmware Activation Without Reset: N/A 00:07:36.395 Multiple Update Detection Support: N/A 00:07:36.395 Firmware Update Granularity: No Information Provided 00:07:36.395 Per-Namespace SMART Log: Yes 00:07:36.395 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.395 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:36.395 Command Effects Log Page: Supported 00:07:36.395 Get Log Page Extended Data: Supported 00:07:36.395 Telemetry Log Pages: Not Supported 00:07:36.395 Persistent Event Log Pages: Not Supported 00:07:36.395 Supported Log Pages Log Page: May Support 00:07:36.395 Commands Supported & Effects Log Page: Not Supported 00:07:36.395 Feature Identifiers & Effects Log Page:May Support 00:07:36.395 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.395 Data Area 4 for Telemetry Log: Not Supported 00:07:36.395 Error Log Page Entries Supported: 1 00:07:36.395 Keep Alive: Not Supported 00:07:36.395 00:07:36.395 NVM Command Set Attributes 00:07:36.395 ========================== 00:07:36.395 Submission Queue Entry Size 00:07:36.395 Max: 64 00:07:36.395 Min: 64 00:07:36.395 Completion Queue Entry Size 00:07:36.395 Max: 16 
00:07:36.395 Min: 16 00:07:36.395 Number of Namespaces: 256 00:07:36.395 Compare Command: Supported 00:07:36.395 Write Uncorrectable Command: Not Supported 00:07:36.395 Dataset Management Command: Supported 00:07:36.395 Write Zeroes Command: Supported 00:07:36.395 Set Features Save Field: Supported 00:07:36.395 Reservations: Not Supported 00:07:36.395 Timestamp: Supported 00:07:36.395 Copy: Supported 00:07:36.395 Volatile Write Cache: Present 00:07:36.395 Atomic Write Unit (Normal): 1 00:07:36.395 Atomic Write Unit (PFail): 1 00:07:36.395 Atomic Compare & Write Unit: 1 00:07:36.395 Fused Compare & Write: Not Supported 00:07:36.395 Scatter-Gather List 00:07:36.395 SGL Command Set: Supported 00:07:36.395 SGL Keyed: Not Supported 00:07:36.395 SGL Bit Bucket Descriptor: Not Supported 00:07:36.395 SGL Metadata Pointer: Not Supported 00:07:36.395 Oversized SGL: Not Supported 00:07:36.395 SGL Metadata Address: Not Supported 00:07:36.395 SGL Offset: Not Supported 00:07:36.395 Transport SGL Data Block: Not Supported 00:07:36.395 Replay Protected Memory Block: Not Supported 00:07:36.395 00:07:36.395 Firmware Slot Information 00:07:36.395 ========================= 00:07:36.395 Active slot: 1 00:07:36.395 Slot 1 Firmware Revision: 1.0 00:07:36.395 00:07:36.395 00:07:36.395 Commands Supported and Effects 00:07:36.395 ============================== 00:07:36.395 Admin Commands 00:07:36.395 -------------- 00:07:36.395 Delete I/O Submission Queue (00h): Supported 00:07:36.395 Create I/O Submission Queue (01h): Supported 00:07:36.395 Get Log Page (02h): Supported 00:07:36.395 Delete I/O Completion Queue (04h): Supported 00:07:36.395 Create I/O Completion Queue (05h): Supported 00:07:36.395 Identify (06h): Supported 00:07:36.395 Abort (08h): Supported 00:07:36.395 Set Features (09h): Supported 00:07:36.395 Get Features (0Ah): Supported 00:07:36.395 Asynchronous Event Request (0Ch): Supported 00:07:36.395 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.395 Directive Send (19h): Supported 00:07:36.395 Directive Receive (1Ah): Supported 00:07:36.395 Virtualization Management (1Ch): Supported 00:07:36.395 Doorbell Buffer Config (7Ch): Supported 00:07:36.395 Format NVM (80h): Supported LBA-Change 00:07:36.395 I/O Commands 00:07:36.395 ------------ 00:07:36.395 Flush (00h): Supported LBA-Change 00:07:36.395 Write (01h): Supported LBA-Change 00:07:36.395 Read (02h): Supported 00:07:36.395 Compare (05h): Supported 00:07:36.395 Write Zeroes (08h): Supported LBA-Change 00:07:36.395 Dataset Management (09h): Supported LBA-Change 00:07:36.395 Unknown (0Ch): Supported 00:07:36.395 Unknown (12h): Supported 00:07:36.395 Copy (19h): Supported LBA-Change 00:07:36.395 Unknown (1Dh): Supported LBA-Change 00:07:36.395 00:07:36.395 Error Log 00:07:36.395 ========= 00:07:36.395 00:07:36.395 Arbitration 00:07:36.395 =========== 00:07:36.395 Arbitration Burst: no limit 00:07:36.395 00:07:36.395 Power Management 00:07:36.395 ================ 00:07:36.395 Number of Power States: 1 00:07:36.395 Current Power State: Power State #0 00:07:36.395 Power State #0: 00:07:36.395 Max Power: 25.00 W 00:07:36.395 Non-Operational State: Operational 00:07:36.395 Entry Latency: 16 microseconds 00:07:36.395 Exit Latency: 4 microseconds 00:07:36.395 Relative Read Throughput: 0 00:07:36.395 Relative Read Latency: 0 00:07:36.395 Relative Write Throughput: 0 00:07:36.395 Relative Write Latency: 0 00:07:36.395 Idle Power: Not Reported 00:07:36.395 Active Power: Not Reported 00:07:36.395 Non-Operational Permissive Mode: Not Supported 
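The per-controller dumps in this log are produced by the nvme_identify loop traced at nvme/nvme.sh@15-16 above. A minimal sketch of that loop follows, with a grep that pulls the composite temperature out of each Health Information block; the bdfs array contents and the rootdir variable are assumptions standing in for the harness's real discovery helpers, which are not shown in this excerpt.

#!/usr/bin/env bash
# Sketch only: iterate the PCIe controllers under test and run the identify
# tool, as the xtrace lines in this log do. The bdfs array is assumed to be
# populated elsewhere by the harness; these four BDFs appear in this log.
rootdir=/home/vagrant/spdk_repo/spdk   # repo path taken from the trace above
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

for bdf in "${bdfs[@]}"; do
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
        grep 'Current Temperature'   # e.g. "Current Temperature: 323 Kelvin (50 Celsius)"
done

The Celsius value the tool prints in parentheses is simply the Kelvin reading minus 273, so 323 Kelvin reads as 50 Celsius and the 343 Kelvin threshold as 70 Celsius. The namespace sizes follow the same kind of arithmetic: with the current LBA format #04 (4096-byte data blocks), 1310720 LBAs is 1310720 x 4096 bytes = 5 GiB, and 1048576 LBAs is 4 GiB, matching the parenthesized values in these dumps.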
00:07:36.395 00:07:36.395 Health Information 00:07:36.395 ================== 00:07:36.395 Critical Warnings: 00:07:36.395 Available Spare Space: OK 00:07:36.395 Temperature: OK 00:07:36.395 Device Reliability: OK 00:07:36.395 Read Only: No 00:07:36.395 Volatile Memory Backup: OK 00:07:36.395 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.395 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.395 Available Spare: 0% 00:07:36.395 Available Spare Threshold: 0% 00:07:36.395 Life Percentage Used: 0% 00:07:36.395 Data Units Read: 2124 00:07:36.395 Data Units Written: 1911 00:07:36.395 Host Read Commands: 103774 00:07:36.395 Host Write Commands: 102044 00:07:36.395 Controller Busy Time: 0 minutes 00:07:36.395 Power Cycles: 0 00:07:36.395 Power On Hours: 0 hours 00:07:36.395 Unsafe Shutdowns: 0 00:07:36.395 Unrecoverable Media Errors: 0 00:07:36.395 Lifetime Error Log Entries: 0 00:07:36.395 Warning Temperature Time: 0 minutes 00:07:36.395 Critical Temperature Time: 0 minutes 00:07:36.395 00:07:36.395 Number of Queues 00:07:36.395 ================ 00:07:36.395 Number of I/O Submission Queues: 64 00:07:36.395 Number of I/O Completion Queues: 64 00:07:36.395 00:07:36.395 ZNS Specific Controller Data 00:07:36.395 ============================ 00:07:36.395 Zone Append Size Limit: 0 00:07:36.395 00:07:36.395 00:07:36.395 Active Namespaces 00:07:36.395 ================= 00:07:36.395 Namespace ID:1 00:07:36.395 Error Recovery Timeout: Unlimited 00:07:36.395 Command Set Identifier: NVM (00h) 00:07:36.395 Deallocate: Supported 00:07:36.395 Deallocated/Unwritten Error: Supported 00:07:36.395 Deallocated Read Value: All 0x00 00:07:36.396 Deallocate in Write Zeroes: Not Supported 00:07:36.396 Deallocated Guard Field: 0xFFFF 00:07:36.396 Flush: Supported 00:07:36.396 Reservation: Not Supported 00:07:36.396 Namespace Sharing Capabilities: Private 00:07:36.396 Size (in LBAs): 1048576 (4GiB) 00:07:36.396 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.396 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.396 Thin Provisioning: Not Supported 00:07:36.396 Per-NS Atomic Units: No 00:07:36.396 Maximum Single Source Range Length: 128 00:07:36.396 Maximum Copy Length: 128 00:07:36.396 Maximum Source Range Count: 128 00:07:36.396 NGUID/EUI64 Never Reused: No 00:07:36.396 Namespace Write Protected: No 00:07:36.396 Number of LBA Formats: 8 00:07:36.396 Current LBA Format: LBA Format #04 00:07:36.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.396 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.396 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.396 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.396 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.396 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.396 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.396 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.396 00:07:36.396 NVM Specific Namespace Data 00:07:36.396 =========================== 00:07:36.396 Logical Block Storage Tag Mask: 0 00:07:36.396 Protection Information Capabilities: 00:07:36.396 16b Guard Protection Information Storage Tag Support: No 00:07:36.396 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.396 Storage Tag Check Read Support: No 00:07:36.396 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Namespace ID:2 00:07:36.396 Error Recovery Timeout: Unlimited 00:07:36.396 Command Set Identifier: NVM (00h) 00:07:36.396 Deallocate: Supported 00:07:36.396 Deallocated/Unwritten Error: Supported 00:07:36.396 Deallocated Read Value: All 0x00 00:07:36.396 Deallocate in Write Zeroes: Not Supported 00:07:36.396 Deallocated Guard Field: 0xFFFF 00:07:36.396 Flush: Supported 00:07:36.396 Reservation: Not Supported 00:07:36.396 Namespace Sharing Capabilities: Private 00:07:36.396 Size (in LBAs): 1048576 (4GiB) 00:07:36.396 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.396 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.396 Thin Provisioning: Not Supported 00:07:36.396 Per-NS Atomic Units: No 00:07:36.396 Maximum Single Source Range Length: 128 00:07:36.396 Maximum Copy Length: 128 00:07:36.396 Maximum Source Range Count: 128 00:07:36.396 NGUID/EUI64 Never Reused: No 00:07:36.396 Namespace Write Protected: No 00:07:36.396 Number of LBA Formats: 8 00:07:36.396 Current LBA Format: LBA Format #04 00:07:36.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.396 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.396 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.396 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.396 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.396 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.396 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.396 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.396 00:07:36.396 NVM Specific Namespace Data 00:07:36.396 =========================== 00:07:36.396 Logical Block Storage Tag Mask: 0 00:07:36.396 Protection Information Capabilities: 00:07:36.396 16b Guard Protection Information Storage Tag Support: No 00:07:36.396 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.396 Storage Tag Check Read Support: No 00:07:36.396 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Namespace ID:3 00:07:36.396 Error Recovery Timeout: Unlimited 00:07:36.396 Command Set Identifier: NVM (00h) 00:07:36.396 Deallocate: Supported 00:07:36.396 Deallocated/Unwritten Error: Supported 00:07:36.396 Deallocated Read 
Value: All 0x00 00:07:36.396 Deallocate in Write Zeroes: Not Supported 00:07:36.396 Deallocated Guard Field: 0xFFFF 00:07:36.396 Flush: Supported 00:07:36.396 Reservation: Not Supported 00:07:36.396 Namespace Sharing Capabilities: Private 00:07:36.396 Size (in LBAs): 1048576 (4GiB) 00:07:36.396 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.396 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.396 Thin Provisioning: Not Supported 00:07:36.396 Per-NS Atomic Units: No 00:07:36.396 Maximum Single Source Range Length: 128 00:07:36.396 Maximum Copy Length: 128 00:07:36.396 Maximum Source Range Count: 128 00:07:36.396 NGUID/EUI64 Never Reused: No 00:07:36.396 Namespace Write Protected: No 00:07:36.396 Number of LBA Formats: 8 00:07:36.396 Current LBA Format: LBA Format #04 00:07:36.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.396 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.396 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.396 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.396 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.396 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.396 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.396 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.396 00:07:36.396 NVM Specific Namespace Data 00:07:36.396 =========================== 00:07:36.396 Logical Block Storage Tag Mask: 0 00:07:36.396 Protection Information Capabilities: 00:07:36.396 16b Guard Protection Information Storage Tag Support: No 00:07:36.396 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.396 Storage Tag Check Read Support: No 00:07:36.396 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.396 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.396 19:57:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:36.396 ===================================================== 00:07:36.396 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:36.396 ===================================================== 00:07:36.396 Controller Capabilities/Features 00:07:36.396 ================================ 00:07:36.396 Vendor ID: 1b36 00:07:36.396 Subsystem Vendor ID: 1af4 00:07:36.396 Serial Number: 12343 00:07:36.396 Model Number: QEMU NVMe Ctrl 00:07:36.396 Firmware Version: 8.0.0 00:07:36.396 Recommended Arb Burst: 6 00:07:36.396 IEEE OUI Identifier: 00 54 52 00:07:36.396 Multi-path I/O 00:07:36.396 May have multiple subsystem ports: No 00:07:36.396 May have multiple controllers: Yes 00:07:36.396 Associated with SR-IOV VF: No 00:07:36.396 Max Data Transfer Size: 524288 00:07:36.396 Max Number of Namespaces: 
256 00:07:36.396 Max Number of I/O Queues: 64 00:07:36.396 NVMe Specification Version (VS): 1.4 00:07:36.396 NVMe Specification Version (Identify): 1.4 00:07:36.396 Maximum Queue Entries: 2048 00:07:36.396 Contiguous Queues Required: Yes 00:07:36.396 Arbitration Mechanisms Supported 00:07:36.396 Weighted Round Robin: Not Supported 00:07:36.396 Vendor Specific: Not Supported 00:07:36.396 Reset Timeout: 7500 ms 00:07:36.396 Doorbell Stride: 4 bytes 00:07:36.396 NVM Subsystem Reset: Not Supported 00:07:36.396 Command Sets Supported 00:07:36.396 NVM Command Set: Supported 00:07:36.396 Boot Partition: Not Supported 00:07:36.397 Memory Page Size Minimum: 4096 bytes 00:07:36.397 Memory Page Size Maximum: 65536 bytes 00:07:36.397 Persistent Memory Region: Not Supported 00:07:36.397 Optional Asynchronous Events Supported 00:07:36.397 Namespace Attribute Notices: Supported 00:07:36.397 Firmware Activation Notices: Not Supported 00:07:36.397 ANA Change Notices: Not Supported 00:07:36.397 PLE Aggregate Log Change Notices: Not Supported 00:07:36.397 LBA Status Info Alert Notices: Not Supported 00:07:36.397 EGE Aggregate Log Change Notices: Not Supported 00:07:36.397 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.397 Zone Descriptor Change Notices: Not Supported 00:07:36.397 Discovery Log Change Notices: Not Supported 00:07:36.397 Controller Attributes 00:07:36.397 128-bit Host Identifier: Not Supported 00:07:36.397 Non-Operational Permissive Mode: Not Supported 00:07:36.397 NVM Sets: Not Supported 00:07:36.397 Read Recovery Levels: Not Supported 00:07:36.397 Endurance Groups: Supported 00:07:36.397 Predictable Latency Mode: Not Supported 00:07:36.397 Traffic Based Keep ALive: Not Supported 00:07:36.397 Namespace Granularity: Not Supported 00:07:36.397 SQ Associations: Not Supported 00:07:36.397 UUID List: Not Supported 00:07:36.397 Multi-Domain Subsystem: Not Supported 00:07:36.397 Fixed Capacity Management: Not Supported 00:07:36.397 Variable Capacity Management: Not Supported 00:07:36.397 Delete Endurance Group: Not Supported 00:07:36.397 Delete NVM Set: Not Supported 00:07:36.397 Extended LBA Formats Supported: Supported 00:07:36.397 Flexible Data Placement Supported: Supported 00:07:36.397 00:07:36.397 Controller Memory Buffer Support 00:07:36.397 ================================ 00:07:36.397 Supported: No 00:07:36.397 00:07:36.397 Persistent Memory Region Support 00:07:36.397 ================================ 00:07:36.397 Supported: No 00:07:36.397 00:07:36.397 Admin Command Set Attributes 00:07:36.397 ============================ 00:07:36.397 Security Send/Receive: Not Supported 00:07:36.397 Format NVM: Supported 00:07:36.397 Firmware Activate/Download: Not Supported 00:07:36.397 Namespace Management: Supported 00:07:36.397 Device Self-Test: Not Supported 00:07:36.397 Directives: Supported 00:07:36.397 NVMe-MI: Not Supported 00:07:36.397 Virtualization Management: Not Supported 00:07:36.397 Doorbell Buffer Config: Supported 00:07:36.397 Get LBA Status Capability: Not Supported 00:07:36.397 Command & Feature Lockdown Capability: Not Supported 00:07:36.397 Abort Command Limit: 4 00:07:36.397 Async Event Request Limit: 4 00:07:36.397 Number of Firmware Slots: N/A 00:07:36.397 Firmware Slot 1 Read-Only: N/A 00:07:36.397 Firmware Activation Without Reset: N/A 00:07:36.397 Multiple Update Detection Support: N/A 00:07:36.397 Firmware Update Granularity: No Information Provided 00:07:36.397 Per-Namespace SMART Log: Yes 00:07:36.397 Asymmetric Namespace Access Log Page: Not Supported 
00:07:36.397 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:36.397 Command Effects Log Page: Supported 00:07:36.397 Get Log Page Extended Data: Supported 00:07:36.397 Telemetry Log Pages: Not Supported 00:07:36.397 Persistent Event Log Pages: Not Supported 00:07:36.397 Supported Log Pages Log Page: May Support 00:07:36.397 Commands Supported & Effects Log Page: Not Supported 00:07:36.397 Feature Identifiers & Effects Log Page:May Support 00:07:36.397 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.397 Data Area 4 for Telemetry Log: Not Supported 00:07:36.397 Error Log Page Entries Supported: 1 00:07:36.397 Keep Alive: Not Supported 00:07:36.397 00:07:36.397 NVM Command Set Attributes 00:07:36.397 ========================== 00:07:36.397 Submission Queue Entry Size 00:07:36.397 Max: 64 00:07:36.397 Min: 64 00:07:36.397 Completion Queue Entry Size 00:07:36.397 Max: 16 00:07:36.397 Min: 16 00:07:36.397 Number of Namespaces: 256 00:07:36.397 Compare Command: Supported 00:07:36.397 Write Uncorrectable Command: Not Supported 00:07:36.397 Dataset Management Command: Supported 00:07:36.397 Write Zeroes Command: Supported 00:07:36.397 Set Features Save Field: Supported 00:07:36.397 Reservations: Not Supported 00:07:36.397 Timestamp: Supported 00:07:36.397 Copy: Supported 00:07:36.397 Volatile Write Cache: Present 00:07:36.397 Atomic Write Unit (Normal): 1 00:07:36.397 Atomic Write Unit (PFail): 1 00:07:36.397 Atomic Compare & Write Unit: 1 00:07:36.397 Fused Compare & Write: Not Supported 00:07:36.397 Scatter-Gather List 00:07:36.397 SGL Command Set: Supported 00:07:36.397 SGL Keyed: Not Supported 00:07:36.397 SGL Bit Bucket Descriptor: Not Supported 00:07:36.397 SGL Metadata Pointer: Not Supported 00:07:36.397 Oversized SGL: Not Supported 00:07:36.397 SGL Metadata Address: Not Supported 00:07:36.397 SGL Offset: Not Supported 00:07:36.397 Transport SGL Data Block: Not Supported 00:07:36.397 Replay Protected Memory Block: Not Supported 00:07:36.397 00:07:36.397 Firmware Slot Information 00:07:36.397 ========================= 00:07:36.397 Active slot: 1 00:07:36.397 Slot 1 Firmware Revision: 1.0 00:07:36.397 00:07:36.397 00:07:36.397 Commands Supported and Effects 00:07:36.397 ============================== 00:07:36.397 Admin Commands 00:07:36.397 -------------- 00:07:36.397 Delete I/O Submission Queue (00h): Supported 00:07:36.397 Create I/O Submission Queue (01h): Supported 00:07:36.397 Get Log Page (02h): Supported 00:07:36.397 Delete I/O Completion Queue (04h): Supported 00:07:36.397 Create I/O Completion Queue (05h): Supported 00:07:36.397 Identify (06h): Supported 00:07:36.397 Abort (08h): Supported 00:07:36.397 Set Features (09h): Supported 00:07:36.397 Get Features (0Ah): Supported 00:07:36.397 Asynchronous Event Request (0Ch): Supported 00:07:36.397 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.397 Directive Send (19h): Supported 00:07:36.397 Directive Receive (1Ah): Supported 00:07:36.397 Virtualization Management (1Ch): Supported 00:07:36.397 Doorbell Buffer Config (7Ch): Supported 00:07:36.397 Format NVM (80h): Supported LBA-Change 00:07:36.397 I/O Commands 00:07:36.397 ------------ 00:07:36.397 Flush (00h): Supported LBA-Change 00:07:36.397 Write (01h): Supported LBA-Change 00:07:36.397 Read (02h): Supported 00:07:36.397 Compare (05h): Supported 00:07:36.397 Write Zeroes (08h): Supported LBA-Change 00:07:36.397 Dataset Management (09h): Supported LBA-Change 00:07:36.397 Unknown (0Ch): Supported 00:07:36.397 Unknown (12h): Supported 00:07:36.397 Copy 
(19h): Supported LBA-Change 00:07:36.397 Unknown (1Dh): Supported LBA-Change 00:07:36.397 00:07:36.397 Error Log 00:07:36.397 ========= 00:07:36.397 00:07:36.397 Arbitration 00:07:36.397 =========== 00:07:36.397 Arbitration Burst: no limit 00:07:36.397 00:07:36.397 Power Management 00:07:36.397 ================ 00:07:36.397 Number of Power States: 1 00:07:36.397 Current Power State: Power State #0 00:07:36.397 Power State #0: 00:07:36.397 Max Power: 25.00 W 00:07:36.397 Non-Operational State: Operational 00:07:36.397 Entry Latency: 16 microseconds 00:07:36.397 Exit Latency: 4 microseconds 00:07:36.397 Relative Read Throughput: 0 00:07:36.397 Relative Read Latency: 0 00:07:36.397 Relative Write Throughput: 0 00:07:36.397 Relative Write Latency: 0 00:07:36.397 Idle Power: Not Reported 00:07:36.397 Active Power: Not Reported 00:07:36.397 Non-Operational Permissive Mode: Not Supported 00:07:36.397 00:07:36.397 Health Information 00:07:36.397 ================== 00:07:36.397 Critical Warnings: 00:07:36.397 Available Spare Space: OK 00:07:36.397 Temperature: OK 00:07:36.397 Device Reliability: OK 00:07:36.397 Read Only: No 00:07:36.397 Volatile Memory Backup: OK 00:07:36.397 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.397 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.397 Available Spare: 0% 00:07:36.397 Available Spare Threshold: 0% 00:07:36.397 Life Percentage Used: 0% 00:07:36.397 Data Units Read: 814 00:07:36.397 Data Units Written: 743 00:07:36.397 Host Read Commands: 35416 00:07:36.397 Host Write Commands: 34840 00:07:36.397 Controller Busy Time: 0 minutes 00:07:36.397 Power Cycles: 0 00:07:36.397 Power On Hours: 0 hours 00:07:36.397 Unsafe Shutdowns: 0 00:07:36.397 Unrecoverable Media Errors: 0 00:07:36.397 Lifetime Error Log Entries: 0 00:07:36.397 Warning Temperature Time: 0 minutes 00:07:36.397 Critical Temperature Time: 0 minutes 00:07:36.397 00:07:36.397 Number of Queues 00:07:36.397 ================ 00:07:36.397 Number of I/O Submission Queues: 64 00:07:36.397 Number of I/O Completion Queues: 64 00:07:36.397 00:07:36.397 ZNS Specific Controller Data 00:07:36.398 ============================ 00:07:36.398 Zone Append Size Limit: 0 00:07:36.398 00:07:36.398 00:07:36.398 Active Namespaces 00:07:36.398 ================= 00:07:36.398 Namespace ID:1 00:07:36.398 Error Recovery Timeout: Unlimited 00:07:36.398 Command Set Identifier: NVM (00h) 00:07:36.398 Deallocate: Supported 00:07:36.398 Deallocated/Unwritten Error: Supported 00:07:36.398 Deallocated Read Value: All 0x00 00:07:36.398 Deallocate in Write Zeroes: Not Supported 00:07:36.398 Deallocated Guard Field: 0xFFFF 00:07:36.398 Flush: Supported 00:07:36.398 Reservation: Not Supported 00:07:36.398 Namespace Sharing Capabilities: Multiple Controllers 00:07:36.398 Size (in LBAs): 262144 (1GiB) 00:07:36.398 Capacity (in LBAs): 262144 (1GiB) 00:07:36.398 Utilization (in LBAs): 262144 (1GiB) 00:07:36.398 Thin Provisioning: Not Supported 00:07:36.398 Per-NS Atomic Units: No 00:07:36.398 Maximum Single Source Range Length: 128 00:07:36.398 Maximum Copy Length: 128 00:07:36.398 Maximum Source Range Count: 128 00:07:36.398 NGUID/EUI64 Never Reused: No 00:07:36.398 Namespace Write Protected: No 00:07:36.398 Endurance group ID: 1 00:07:36.398 Number of LBA Formats: 8 00:07:36.398 Current LBA Format: LBA Format #04 00:07:36.398 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.398 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.398 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.398 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:36.398 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.398 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.398 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.398 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.398 00:07:36.398 Get Feature FDP: 00:07:36.398 ================ 00:07:36.398 Enabled: Yes 00:07:36.398 FDP configuration index: 0 00:07:36.398 00:07:36.398 FDP configurations log page 00:07:36.398 =========================== 00:07:36.398 Number of FDP configurations: 1 00:07:36.398 Version: 0 00:07:36.398 Size: 112 00:07:36.398 FDP Configuration Descriptor: 0 00:07:36.398 Descriptor Size: 96 00:07:36.398 Reclaim Group Identifier format: 2 00:07:36.398 FDP Volatile Write Cache: Not Present 00:07:36.398 FDP Configuration: Valid 00:07:36.398 Vendor Specific Size: 0 00:07:36.398 Number of Reclaim Groups: 2 00:07:36.398 Number of Reclaim Unit Handles: 8 00:07:36.398 Max Placement Identifiers: 128 00:07:36.398 Number of Namespaces Supported: 256 00:07:36.398 Reclaim Unit Nominal Size: 6000000 bytes 00:07:36.398 Estimated Reclaim Unit Time Limit: Not Reported 00:07:36.398 RUH Desc #000: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #001: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #002: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #003: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #004: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #005: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #006: RUH Type: Initially Isolated 00:07:36.398 RUH Desc #007: RUH Type: Initially Isolated 00:07:36.398 00:07:36.398 FDP reclaim unit handle usage log page 00:07:36.658 ====================================== 00:07:36.658 Number of Reclaim Unit Handles: 8 00:07:36.658 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:36.658 RUH Usage Desc #001: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #002: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #003: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #004: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #005: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #006: RUH Attributes: Unused 00:07:36.658 RUH Usage Desc #007: RUH Attributes: Unused 00:07:36.658 00:07:36.658 FDP statistics log page 00:07:36.658 ======================= 00:07:36.658 Host bytes with metadata written: 450666496 00:07:36.658 Media bytes with metadata written: 450719744 00:07:36.658 Media bytes erased: 0 00:07:36.658 00:07:36.658 FDP events log page 00:07:36.658 =================== 00:07:36.658 Number of FDP events: 0 00:07:36.658 00:07:36.658 NVM Specific Namespace Data 00:07:36.658 =========================== 00:07:36.658 Logical Block Storage Tag Mask: 0 00:07:36.658 Protection Information Capabilities: 00:07:36.658 16b Guard Protection Information Storage Tag Support: No 00:07:36.658 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.658 Storage Tag Check Read Support: No 00:07:36.658 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.658 00:07:36.658 real 0m1.294s 00:07:36.658 user 0m0.456s 00:07:36.658 sys 0m0.611s 00:07:36.658 19:57:10 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.658 ************************************ 00:07:36.658 END TEST nvme_identify 00:07:36.658 ************************************ 00:07:36.658 19:57:10 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:36.658 19:57:10 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:36.658 19:57:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.658 19:57:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.658 19:57:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.658 ************************************ 00:07:36.658 START TEST nvme_perf 00:07:36.658 ************************************ 00:07:36.658 19:57:10 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:36.658 19:57:10 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:38.086 Initializing NVMe Controllers 00:07:38.086 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.086 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.086 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.086 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.086 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:38.086 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:38.086 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:38.086 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:38.086 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:38.086 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:38.086 Initialization complete. Launching workers. 
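A quick way to sanity-check the summary table that follows: the run above used -o 12288, so each completed I/O transfers 12288 bytes and the MiB/s column should equal IOPS × 12288 / 2^20. A minimal sketch of that check in Python, using the PCIE (0000:00:13.0) NSID 1 row of the table below as the example:

    io_size_bytes = 12288       # from the -o 12288 option on the spdk_nvme_perf command line above
    iops = 7770.95              # IOPS reported for PCIE (0000:00:13.0) NSID 1 in the table below
    mib_per_s = iops * io_size_bytes / (1024 * 1024)
    print(round(mib_per_s, 2))  # prints 91.07, matching the MiB/s column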
00:07:38.086 ======================================================== 00:07:38.086 Latency(us) 00:07:38.086 Device Information : IOPS MiB/s Average min max 00:07:38.086 PCIE (0000:00:13.0) NSID 1 from core 0: 7770.95 91.07 16506.25 12196.41 36703.99 00:07:38.086 PCIE (0000:00:10.0) NSID 1 from core 0: 7770.95 91.07 16485.76 12328.74 35451.31 00:07:38.086 PCIE (0000:00:11.0) NSID 1 from core 0: 7770.95 91.07 16462.93 12290.48 34652.03 00:07:38.086 PCIE (0000:00:12.0) NSID 1 from core 0: 7770.95 91.07 16438.89 10961.82 34617.63 00:07:38.086 PCIE (0000:00:12.0) NSID 2 from core 0: 7770.95 91.07 16415.05 10891.20 33918.14 00:07:38.086 PCIE (0000:00:12.0) NSID 3 from core 0: 7834.64 91.81 16257.96 10315.04 24749.88 00:07:38.086 ======================================================== 00:07:38.086 Total : 46689.39 547.14 16427.57 10315.04 36703.99 00:07:38.086 00:07:38.086 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:38.086 ================================================================================= 00:07:38.086 1.00000% : 12754.314us 00:07:38.086 10.00000% : 14014.622us 00:07:38.086 25.00000% : 15022.868us 00:07:38.086 50.00000% : 16232.763us 00:07:38.086 75.00000% : 17644.308us 00:07:38.086 90.00000% : 18652.554us 00:07:38.086 95.00000% : 19156.677us 00:07:38.086 98.00000% : 20870.695us 00:07:38.086 99.00000% : 30045.735us 00:07:38.086 99.50000% : 35893.563us 00:07:38.086 99.90000% : 36700.160us 00:07:38.086 99.99000% : 36901.809us 00:07:38.086 99.99900% : 36901.809us 00:07:38.086 99.99990% : 36901.809us 00:07:38.086 99.99999% : 36901.809us 00:07:38.086 00:07:38.086 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:38.086 ================================================================================= 00:07:38.086 1.00000% : 12855.138us 00:07:38.086 10.00000% : 13913.797us 00:07:38.086 25.00000% : 14922.043us 00:07:38.086 50.00000% : 16333.588us 00:07:38.086 75.00000% : 17644.308us 00:07:38.086 90.00000% : 18854.203us 00:07:38.086 95.00000% : 19459.151us 00:07:38.086 98.00000% : 20971.520us 00:07:38.086 99.00000% : 28634.191us 00:07:38.086 99.50000% : 34683.668us 00:07:38.086 99.90000% : 35288.615us 00:07:38.086 99.99000% : 35490.265us 00:07:38.086 99.99900% : 35490.265us 00:07:38.086 99.99990% : 35490.265us 00:07:38.086 99.99999% : 35490.265us 00:07:38.086 00:07:38.086 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:38.086 ================================================================================= 00:07:38.086 1.00000% : 13006.375us 00:07:38.086 10.00000% : 13913.797us 00:07:38.086 25.00000% : 14922.043us 00:07:38.086 50.00000% : 16333.588us 00:07:38.086 75.00000% : 17644.308us 00:07:38.086 90.00000% : 18854.203us 00:07:38.086 95.00000% : 19559.975us 00:07:38.086 98.00000% : 20669.046us 00:07:38.086 99.00000% : 27222.646us 00:07:38.086 99.50000% : 33675.422us 00:07:38.086 99.90000% : 34482.018us 00:07:38.086 99.99000% : 34683.668us 00:07:38.086 99.99900% : 34683.668us 00:07:38.086 99.99990% : 34683.668us 00:07:38.086 99.99999% : 34683.668us 00:07:38.086 00:07:38.086 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:38.086 ================================================================================= 00:07:38.086 1.00000% : 12098.954us 00:07:38.086 10.00000% : 14014.622us 00:07:38.086 25.00000% : 15022.868us 00:07:38.086 50.00000% : 16333.588us 00:07:38.086 75.00000% : 17442.658us 00:07:38.086 90.00000% : 18955.028us 00:07:38.086 95.00000% : 19862.449us 00:07:38.086 98.00000% : 20971.520us 
00:07:38.086 99.00000% : 26416.049us 00:07:38.086 99.50000% : 33675.422us 00:07:38.086 99.90000% : 34482.018us 00:07:38.086 99.99000% : 34683.668us 00:07:38.086 99.99900% : 34683.668us 00:07:38.086 99.99990% : 34683.668us 00:07:38.086 99.99999% : 34683.668us 00:07:38.086 00:07:38.086 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:38.086 ================================================================================= 00:07:38.087 1.00000% : 11796.480us 00:07:38.087 10.00000% : 14014.622us 00:07:38.087 25.00000% : 15022.868us 00:07:38.087 50.00000% : 16232.763us 00:07:38.087 75.00000% : 17543.483us 00:07:38.087 90.00000% : 18753.378us 00:07:38.087 95.00000% : 19559.975us 00:07:38.087 98.00000% : 21273.994us 00:07:38.087 99.00000% : 24702.031us 00:07:38.087 99.50000% : 32868.825us 00:07:38.087 99.90000% : 33877.071us 00:07:38.087 99.99000% : 34078.720us 00:07:38.087 99.99900% : 34078.720us 00:07:38.087 99.99990% : 34078.720us 00:07:38.087 99.99999% : 34078.720us 00:07:38.087 00:07:38.087 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:38.087 ================================================================================= 00:07:38.087 1.00000% : 11746.068us 00:07:38.087 10.00000% : 13913.797us 00:07:38.087 25.00000% : 14922.043us 00:07:38.087 50.00000% : 16232.763us 00:07:38.087 75.00000% : 17644.308us 00:07:38.087 90.00000% : 18753.378us 00:07:38.087 95.00000% : 19257.502us 00:07:38.087 98.00000% : 20164.923us 00:07:38.087 99.00000% : 21878.942us 00:07:38.087 99.50000% : 23895.434us 00:07:38.087 99.90000% : 24601.206us 00:07:38.087 99.99000% : 24802.855us 00:07:38.087 99.99900% : 24802.855us 00:07:38.087 99.99990% : 24802.855us 00:07:38.087 99.99999% : 24802.855us 00:07:38.087 00:07:38.087 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:38.087 ============================================================================== 00:07:38.087 Range in us Cumulative IO count 00:07:38.087 12149.366 - 12199.778: 0.0128% ( 1) 00:07:38.087 12199.778 - 12250.191: 0.0384% ( 2) 00:07:38.087 12250.191 - 12300.603: 0.1537% ( 9) 00:07:38.087 12300.603 - 12351.015: 0.2690% ( 9) 00:07:38.087 12351.015 - 12401.428: 0.3842% ( 9) 00:07:38.087 12401.428 - 12451.840: 0.4483% ( 5) 00:07:38.087 12451.840 - 12502.252: 0.4867% ( 3) 00:07:38.087 12502.252 - 12552.665: 0.5891% ( 8) 00:07:38.087 12552.665 - 12603.077: 0.6660% ( 6) 00:07:38.087 12603.077 - 12653.489: 0.7941% ( 10) 00:07:38.087 12653.489 - 12703.902: 0.9093% ( 9) 00:07:38.087 12703.902 - 12754.314: 1.0246% ( 9) 00:07:38.087 12754.314 - 12804.726: 1.1399% ( 9) 00:07:38.087 12804.726 - 12855.138: 1.2935% ( 12) 00:07:38.087 12855.138 - 12905.551: 1.4600% ( 13) 00:07:38.087 12905.551 - 13006.375: 1.8443% ( 30) 00:07:38.087 13006.375 - 13107.200: 2.2669% ( 33) 00:07:38.087 13107.200 - 13208.025: 2.7280% ( 36) 00:07:38.087 13208.025 - 13308.849: 3.2275% ( 39) 00:07:38.087 13308.849 - 13409.674: 3.9191% ( 54) 00:07:38.087 13409.674 - 13510.498: 4.7643% ( 66) 00:07:38.087 13510.498 - 13611.323: 5.6224% ( 67) 00:07:38.087 13611.323 - 13712.148: 6.8263% ( 94) 00:07:38.087 13712.148 - 13812.972: 8.0815% ( 98) 00:07:38.087 13812.972 - 13913.797: 9.2469% ( 91) 00:07:38.087 13913.797 - 14014.622: 10.5405% ( 101) 00:07:38.087 14014.622 - 14115.446: 11.9877% ( 113) 00:07:38.087 14115.446 - 14216.271: 13.4606% ( 115) 00:07:38.087 14216.271 - 14317.095: 14.9718% ( 118) 00:07:38.087 14317.095 - 14417.920: 16.4703% ( 117) 00:07:38.087 14417.920 - 14518.745: 18.1865% ( 134) 00:07:38.087 14518.745 - 14619.569: 
19.9283% ( 136) 00:07:38.087 14619.569 - 14720.394: 21.5804% ( 129) 00:07:38.087 14720.394 - 14821.218: 23.1301% ( 121) 00:07:38.087 14821.218 - 14922.043: 24.9103% ( 139) 00:07:38.087 14922.043 - 15022.868: 26.5113% ( 125) 00:07:38.087 15022.868 - 15123.692: 28.1122% ( 125) 00:07:38.087 15123.692 - 15224.517: 29.9180% ( 141) 00:07:38.087 15224.517 - 15325.342: 32.0441% ( 166) 00:07:38.087 15325.342 - 15426.166: 34.0420% ( 156) 00:07:38.087 15426.166 - 15526.991: 36.0400% ( 156) 00:07:38.087 15526.991 - 15627.815: 38.1148% ( 162) 00:07:38.087 15627.815 - 15728.640: 40.1895% ( 162) 00:07:38.087 15728.640 - 15829.465: 42.3156% ( 166) 00:07:38.087 15829.465 - 15930.289: 44.2495% ( 151) 00:07:38.087 15930.289 - 16031.114: 46.3755% ( 166) 00:07:38.087 16031.114 - 16131.938: 48.3863% ( 157) 00:07:38.087 16131.938 - 16232.763: 50.3074% ( 150) 00:07:38.087 16232.763 - 16333.588: 51.9595% ( 129) 00:07:38.087 16333.588 - 16434.412: 53.9319% ( 154) 00:07:38.087 16434.412 - 16535.237: 55.8402% ( 149) 00:07:38.087 16535.237 - 16636.062: 57.7741% ( 151) 00:07:38.087 16636.062 - 16736.886: 59.6696% ( 148) 00:07:38.087 16736.886 - 16837.711: 61.4754% ( 141) 00:07:38.087 16837.711 - 16938.535: 63.2172% ( 136) 00:07:38.087 16938.535 - 17039.360: 64.7797% ( 122) 00:07:38.087 17039.360 - 17140.185: 66.4191% ( 128) 00:07:38.087 17140.185 - 17241.009: 68.1481% ( 135) 00:07:38.087 17241.009 - 17341.834: 69.6337% ( 116) 00:07:38.087 17341.834 - 17442.658: 71.3755% ( 136) 00:07:38.087 17442.658 - 17543.483: 73.3094% ( 151) 00:07:38.087 17543.483 - 17644.308: 75.3458% ( 159) 00:07:38.087 17644.308 - 17745.132: 77.0492% ( 133) 00:07:38.087 17745.132 - 17845.957: 78.5989% ( 121) 00:07:38.087 17845.957 - 17946.782: 79.9693% ( 107) 00:07:38.087 17946.782 - 18047.606: 81.4421% ( 115) 00:07:38.087 18047.606 - 18148.431: 82.8253% ( 108) 00:07:38.087 18148.431 - 18249.255: 84.3622% ( 120) 00:07:38.087 18249.255 - 18350.080: 85.9887% ( 127) 00:07:38.087 18350.080 - 18450.905: 87.5384% ( 121) 00:07:38.087 18450.905 - 18551.729: 88.9216% ( 108) 00:07:38.087 18551.729 - 18652.554: 90.2536% ( 104) 00:07:38.087 18652.554 - 18753.378: 91.3550% ( 86) 00:07:38.087 18753.378 - 18854.203: 92.4693% ( 87) 00:07:38.087 18854.203 - 18955.028: 93.4426% ( 76) 00:07:38.087 18955.028 - 19055.852: 94.3648% ( 72) 00:07:38.087 19055.852 - 19156.677: 95.0179% ( 51) 00:07:38.087 19156.677 - 19257.502: 95.5046% ( 38) 00:07:38.087 19257.502 - 19358.326: 95.8632% ( 28) 00:07:38.087 19358.326 - 19459.151: 96.1834% ( 25) 00:07:38.087 19459.151 - 19559.975: 96.4267% ( 19) 00:07:38.087 19559.975 - 19660.800: 96.5548% ( 10) 00:07:38.087 19660.800 - 19761.625: 96.7341% ( 14) 00:07:38.087 19761.625 - 19862.449: 96.8878% ( 12) 00:07:38.087 19862.449 - 19963.274: 96.9518% ( 5) 00:07:38.087 19963.274 - 20064.098: 97.0543% ( 8) 00:07:38.087 20064.098 - 20164.923: 97.1952% ( 11) 00:07:38.087 20164.923 - 20265.748: 97.3361% ( 11) 00:07:38.087 20265.748 - 20366.572: 97.4641% ( 10) 00:07:38.087 20366.572 - 20467.397: 97.6050% ( 11) 00:07:38.087 20467.397 - 20568.222: 97.7203% ( 9) 00:07:38.087 20568.222 - 20669.046: 97.8484% ( 10) 00:07:38.087 20669.046 - 20769.871: 97.9764% ( 10) 00:07:38.087 20769.871 - 20870.695: 98.1045% ( 10) 00:07:38.087 20870.695 - 20971.520: 98.2198% ( 9) 00:07:38.087 20971.520 - 21072.345: 98.2966% ( 6) 00:07:38.087 21072.345 - 21173.169: 98.3607% ( 5) 00:07:38.087 28835.840 - 29037.489: 98.4887% ( 10) 00:07:38.087 29037.489 - 29239.138: 98.5912% ( 8) 00:07:38.087 29239.138 - 29440.788: 98.7065% ( 9) 00:07:38.087 29440.788 - 29642.437: 
98.8217% ( 9) 00:07:38.087 29642.437 - 29844.086: 98.9242% ( 8) 00:07:38.087 29844.086 - 30045.735: 99.0266% ( 8) 00:07:38.087 30045.735 - 30247.385: 99.1419% ( 9) 00:07:38.087 30247.385 - 30449.034: 99.1803% ( 3) 00:07:38.087 34078.720 - 34280.369: 99.1931% ( 1) 00:07:38.087 35086.966 - 35288.615: 99.2316% ( 3) 00:07:38.087 35288.615 - 35490.265: 99.3340% ( 8) 00:07:38.087 35490.265 - 35691.914: 99.4493% ( 9) 00:07:38.087 35691.914 - 35893.563: 99.5517% ( 8) 00:07:38.087 35893.563 - 36095.212: 99.6670% ( 9) 00:07:38.087 36095.212 - 36296.862: 99.7823% ( 9) 00:07:38.087 36296.862 - 36498.511: 99.8847% ( 8) 00:07:38.087 36498.511 - 36700.160: 99.9872% ( 8) 00:07:38.087 36700.160 - 36901.809: 100.0000% ( 1) 00:07:38.087 00:07:38.087 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:38.087 ============================================================================== 00:07:38.087 Range in us Cumulative IO count 00:07:38.087 12300.603 - 12351.015: 0.0256% ( 2) 00:07:38.087 12351.015 - 12401.428: 0.0768% ( 4) 00:07:38.087 12401.428 - 12451.840: 0.1153% ( 3) 00:07:38.087 12451.840 - 12502.252: 0.2049% ( 7) 00:07:38.087 12502.252 - 12552.665: 0.2818% ( 6) 00:07:38.087 12552.665 - 12603.077: 0.3970% ( 9) 00:07:38.087 12603.077 - 12653.489: 0.4995% ( 8) 00:07:38.087 12653.489 - 12703.902: 0.7300% ( 18) 00:07:38.087 12703.902 - 12754.314: 0.7812% ( 4) 00:07:38.087 12754.314 - 12804.726: 0.9606% ( 14) 00:07:38.087 12804.726 - 12855.138: 1.0502% ( 7) 00:07:38.087 12855.138 - 12905.551: 1.1399% ( 7) 00:07:38.087 12905.551 - 13006.375: 1.4088% ( 21) 00:07:38.087 13006.375 - 13107.200: 1.9083% ( 39) 00:07:38.087 13107.200 - 13208.025: 2.9713% ( 83) 00:07:38.087 13208.025 - 13308.849: 3.7782% ( 63) 00:07:38.087 13308.849 - 13409.674: 5.1230% ( 105) 00:07:38.087 13409.674 - 13510.498: 6.0579% ( 73) 00:07:38.087 13510.498 - 13611.323: 7.4539% ( 109) 00:07:38.087 13611.323 - 13712.148: 8.4016% ( 74) 00:07:38.087 13712.148 - 13812.972: 9.7592% ( 106) 00:07:38.087 13812.972 - 13913.797: 10.8863% ( 88) 00:07:38.087 13913.797 - 14014.622: 12.2054% ( 103) 00:07:38.087 14014.622 - 14115.446: 13.8704% ( 130) 00:07:38.087 14115.446 - 14216.271: 15.4969% ( 127) 00:07:38.087 14216.271 - 14317.095: 16.8161% ( 103) 00:07:38.087 14317.095 - 14417.920: 18.2505% ( 112) 00:07:38.088 14417.920 - 14518.745: 19.6465% ( 109) 00:07:38.088 14518.745 - 14619.569: 20.9785% ( 104) 00:07:38.088 14619.569 - 14720.394: 22.2208% ( 97) 00:07:38.088 14720.394 - 14821.218: 23.6936% ( 115) 00:07:38.088 14821.218 - 14922.043: 25.1665% ( 115) 00:07:38.088 14922.043 - 15022.868: 26.6265% ( 114) 00:07:38.088 15022.868 - 15123.692: 28.4836% ( 145) 00:07:38.088 15123.692 - 15224.517: 30.3407% ( 145) 00:07:38.088 15224.517 - 15325.342: 32.1593% ( 142) 00:07:38.088 15325.342 - 15426.166: 34.1829% ( 158) 00:07:38.088 15426.166 - 15526.991: 36.3217% ( 167) 00:07:38.088 15526.991 - 15627.815: 38.3837% ( 161) 00:07:38.088 15627.815 - 15728.640: 40.4841% ( 164) 00:07:38.088 15728.640 - 15829.465: 42.3028% ( 142) 00:07:38.088 15829.465 - 15930.289: 44.0574% ( 137) 00:07:38.088 15930.289 - 16031.114: 46.4011% ( 183) 00:07:38.088 16031.114 - 16131.938: 48.1045% ( 133) 00:07:38.088 16131.938 - 16232.763: 49.8335% ( 135) 00:07:38.088 16232.763 - 16333.588: 51.5625% ( 135) 00:07:38.088 16333.588 - 16434.412: 53.1890% ( 127) 00:07:38.088 16434.412 - 16535.237: 55.1742% ( 155) 00:07:38.088 16535.237 - 16636.062: 56.8519% ( 131) 00:07:38.088 16636.062 - 16736.886: 58.4913% ( 128) 00:07:38.088 16736.886 - 16837.711: 60.1306% ( 128) 00:07:38.088 
16837.711 - 16938.535: 62.4616% ( 182) 00:07:38.088 16938.535 - 17039.360: 64.4083% ( 152) 00:07:38.088 17039.360 - 17140.185: 66.2269% ( 142) 00:07:38.088 17140.185 - 17241.009: 68.1609% ( 151) 00:07:38.088 17241.009 - 17341.834: 70.3765% ( 173) 00:07:38.088 17341.834 - 17442.658: 72.1952% ( 142) 00:07:38.088 17442.658 - 17543.483: 73.8986% ( 133) 00:07:38.088 17543.483 - 17644.308: 76.0374% ( 167) 00:07:38.088 17644.308 - 17745.132: 77.5743% ( 120) 00:07:38.088 17745.132 - 17845.957: 79.2520% ( 131) 00:07:38.088 17845.957 - 17946.782: 80.6993% ( 113) 00:07:38.088 17946.782 - 18047.606: 81.8135% ( 87) 00:07:38.088 18047.606 - 18148.431: 82.9662% ( 90) 00:07:38.088 18148.431 - 18249.255: 84.1829% ( 95) 00:07:38.088 18249.255 - 18350.080: 85.2587% ( 84) 00:07:38.088 18350.080 - 18450.905: 86.2705% ( 79) 00:07:38.088 18450.905 - 18551.729: 87.5640% ( 101) 00:07:38.088 18551.729 - 18652.554: 88.5246% ( 75) 00:07:38.088 18652.554 - 18753.378: 89.2930% ( 60) 00:07:38.088 18753.378 - 18854.203: 90.3432% ( 82) 00:07:38.088 18854.203 - 18955.028: 91.2398% ( 70) 00:07:38.088 18955.028 - 19055.852: 92.0978% ( 67) 00:07:38.088 19055.852 - 19156.677: 93.0456% ( 74) 00:07:38.088 19156.677 - 19257.502: 93.8012% ( 59) 00:07:38.088 19257.502 - 19358.326: 94.4416% ( 50) 00:07:38.088 19358.326 - 19459.151: 95.0307% ( 46) 00:07:38.088 19459.151 - 19559.975: 95.7351% ( 55) 00:07:38.088 19559.975 - 19660.800: 95.9529% ( 17) 00:07:38.088 19660.800 - 19761.625: 96.3627% ( 32) 00:07:38.088 19761.625 - 19862.449: 96.7725% ( 32) 00:07:38.088 19862.449 - 19963.274: 96.9390% ( 13) 00:07:38.088 19963.274 - 20064.098: 97.1440% ( 16) 00:07:38.088 20064.098 - 20164.923: 97.3617% ( 17) 00:07:38.088 20164.923 - 20265.748: 97.5410% ( 14) 00:07:38.088 20265.748 - 20366.572: 97.6819% ( 11) 00:07:38.088 20366.572 - 20467.397: 97.7459% ( 5) 00:07:38.088 20467.397 - 20568.222: 97.7971% ( 4) 00:07:38.088 20568.222 - 20669.046: 97.8484% ( 4) 00:07:38.088 20669.046 - 20769.871: 97.8996% ( 4) 00:07:38.088 20769.871 - 20870.695: 97.9636% ( 5) 00:07:38.088 20870.695 - 20971.520: 98.0277% ( 5) 00:07:38.088 20971.520 - 21072.345: 98.0661% ( 3) 00:07:38.088 21072.345 - 21173.169: 98.1173% ( 4) 00:07:38.088 21173.169 - 21273.994: 98.1814% ( 5) 00:07:38.088 21273.994 - 21374.818: 98.2070% ( 2) 00:07:38.088 21374.818 - 21475.643: 98.2710% ( 5) 00:07:38.088 21475.643 - 21576.468: 98.3350% ( 5) 00:07:38.088 21576.468 - 21677.292: 98.3607% ( 2) 00:07:38.088 27020.997 - 27222.646: 98.3735% ( 1) 00:07:38.088 27222.646 - 27424.295: 98.4631% ( 7) 00:07:38.088 27424.295 - 27625.945: 98.5400% ( 6) 00:07:38.088 27625.945 - 27827.594: 98.6424% ( 8) 00:07:38.088 27827.594 - 28029.243: 98.7449% ( 8) 00:07:38.088 28029.243 - 28230.892: 98.8473% ( 8) 00:07:38.088 28230.892 - 28432.542: 98.9498% ( 8) 00:07:38.088 28432.542 - 28634.191: 99.0523% ( 8) 00:07:38.088 28634.191 - 28835.840: 99.1163% ( 5) 00:07:38.088 28835.840 - 29037.489: 99.1803% ( 5) 00:07:38.088 33675.422 - 33877.071: 99.2700% ( 7) 00:07:38.088 33877.071 - 34078.720: 99.3084% ( 3) 00:07:38.088 34078.720 - 34280.369: 99.3852% ( 6) 00:07:38.088 34280.369 - 34482.018: 99.4877% ( 8) 00:07:38.088 34482.018 - 34683.668: 99.6158% ( 10) 00:07:38.088 34683.668 - 34885.317: 99.7310% ( 9) 00:07:38.088 34885.317 - 35086.966: 99.8207% ( 7) 00:07:38.088 35086.966 - 35288.615: 99.9103% ( 7) 00:07:38.088 35288.615 - 35490.265: 100.0000% ( 7) 00:07:38.088 00:07:38.088 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:38.088 
============================================================================== 00:07:38.088 Range in us Cumulative IO count 00:07:38.088 12250.191 - 12300.603: 0.0384% ( 3) 00:07:38.088 12300.603 - 12351.015: 0.0640% ( 2) 00:07:38.088 12351.015 - 12401.428: 0.0897% ( 2) 00:07:38.088 12401.428 - 12451.840: 0.1281% ( 3) 00:07:38.088 12451.840 - 12502.252: 0.1665% ( 3) 00:07:38.088 12502.252 - 12552.665: 0.2177% ( 4) 00:07:38.088 12552.665 - 12603.077: 0.2561% ( 3) 00:07:38.088 12603.077 - 12653.489: 0.2946% ( 3) 00:07:38.088 12653.489 - 12703.902: 0.3458% ( 4) 00:07:38.088 12703.902 - 12754.314: 0.4226% ( 6) 00:07:38.088 12754.314 - 12804.726: 0.5635% ( 11) 00:07:38.088 12804.726 - 12855.138: 0.7300% ( 13) 00:07:38.088 12855.138 - 12905.551: 0.8453% ( 9) 00:07:38.088 12905.551 - 13006.375: 1.1911% ( 27) 00:07:38.088 13006.375 - 13107.200: 1.7034% ( 40) 00:07:38.088 13107.200 - 13208.025: 2.5615% ( 67) 00:07:38.088 13208.025 - 13308.849: 3.5348% ( 76) 00:07:38.088 13308.849 - 13409.674: 4.6491% ( 87) 00:07:38.088 13409.674 - 13510.498: 5.6481% ( 78) 00:07:38.088 13510.498 - 13611.323: 6.6342% ( 77) 00:07:38.088 13611.323 - 13712.148: 7.7357% ( 86) 00:07:38.088 13712.148 - 13812.972: 8.9652% ( 96) 00:07:38.088 13812.972 - 13913.797: 10.2715% ( 102) 00:07:38.088 13913.797 - 14014.622: 11.6419% ( 107) 00:07:38.088 14014.622 - 14115.446: 13.0891% ( 113) 00:07:38.088 14115.446 - 14216.271: 14.3827% ( 101) 00:07:38.088 14216.271 - 14317.095: 15.8940% ( 118) 00:07:38.088 14317.095 - 14417.920: 17.3796% ( 116) 00:07:38.088 14417.920 - 14518.745: 19.0446% ( 130) 00:07:38.088 14518.745 - 14619.569: 20.6199% ( 123) 00:07:38.088 14619.569 - 14720.394: 22.4385% ( 142) 00:07:38.088 14720.394 - 14821.218: 24.3724% ( 151) 00:07:38.088 14821.218 - 14922.043: 26.3320% ( 153) 00:07:38.088 14922.043 - 15022.868: 28.3171% ( 155) 00:07:38.088 15022.868 - 15123.692: 30.1742% ( 145) 00:07:38.088 15123.692 - 15224.517: 32.1465% ( 154) 00:07:38.088 15224.517 - 15325.342: 34.0036% ( 145) 00:07:38.088 15325.342 - 15426.166: 35.9375% ( 151) 00:07:38.088 15426.166 - 15526.991: 37.8074% ( 146) 00:07:38.088 15526.991 - 15627.815: 39.4595% ( 129) 00:07:38.088 15627.815 - 15728.640: 41.1373% ( 131) 00:07:38.088 15728.640 - 15829.465: 42.7638% ( 127) 00:07:38.088 15829.465 - 15930.289: 44.4160% ( 129) 00:07:38.088 15930.289 - 16031.114: 45.9401% ( 119) 00:07:38.088 16031.114 - 16131.938: 47.4641% ( 119) 00:07:38.088 16131.938 - 16232.763: 49.0779% ( 126) 00:07:38.088 16232.763 - 16333.588: 50.6404% ( 122) 00:07:38.088 16333.588 - 16434.412: 52.1773% ( 120) 00:07:38.088 16434.412 - 16535.237: 53.8294% ( 129) 00:07:38.088 16535.237 - 16636.062: 55.4688% ( 128) 00:07:38.088 16636.062 - 16736.886: 57.2362% ( 138) 00:07:38.088 16736.886 - 16837.711: 59.3622% ( 166) 00:07:38.088 16837.711 - 16938.535: 61.3730% ( 157) 00:07:38.088 16938.535 - 17039.360: 63.3197% ( 152) 00:07:38.088 17039.360 - 17140.185: 65.3176% ( 156) 00:07:38.088 17140.185 - 17241.009: 67.5333% ( 173) 00:07:38.088 17241.009 - 17341.834: 69.8899% ( 184) 00:07:38.088 17341.834 - 17442.658: 72.1824% ( 179) 00:07:38.088 17442.658 - 17543.483: 74.2956% ( 165) 00:07:38.088 17543.483 - 17644.308: 76.2039% ( 149) 00:07:38.088 17644.308 - 17745.132: 78.2403% ( 159) 00:07:38.088 17745.132 - 17845.957: 80.0717% ( 143) 00:07:38.088 17845.957 - 17946.782: 81.8135% ( 136) 00:07:38.088 17946.782 - 18047.606: 83.3888% ( 123) 00:07:38.088 18047.606 - 18148.431: 84.7720% ( 108) 00:07:38.088 18148.431 - 18249.255: 85.9631% ( 93) 00:07:38.088 18249.255 - 18350.080: 86.9749% ( 79) 
00:07:38.088 18350.080 - 18450.905: 87.8330% ( 67) 00:07:38.088 18450.905 - 18551.729: 88.5374% ( 55) 00:07:38.088 18551.729 - 18652.554: 89.2930% ( 59) 00:07:38.088 18652.554 - 18753.378: 89.8950% ( 47) 00:07:38.088 18753.378 - 18854.203: 90.6250% ( 57) 00:07:38.088 18854.203 - 18955.028: 91.3294% ( 55) 00:07:38.088 18955.028 - 19055.852: 91.9185% ( 46) 00:07:38.088 19055.852 - 19156.677: 92.5461% ( 49) 00:07:38.088 19156.677 - 19257.502: 93.1352% ( 46) 00:07:38.088 19257.502 - 19358.326: 93.7756% ( 50) 00:07:38.088 19358.326 - 19459.151: 94.4288% ( 51) 00:07:38.088 19459.151 - 19559.975: 95.0307% ( 47) 00:07:38.088 19559.975 - 19660.800: 95.6071% ( 45) 00:07:38.088 19660.800 - 19761.625: 96.1322% ( 41) 00:07:38.088 19761.625 - 19862.449: 96.5548% ( 33) 00:07:38.088 19862.449 - 19963.274: 96.9006% ( 27) 00:07:38.088 19963.274 - 20064.098: 97.1440% ( 19) 00:07:38.088 20064.098 - 20164.923: 97.3361% ( 15) 00:07:38.088 20164.923 - 20265.748: 97.5666% ( 18) 00:07:38.088 20265.748 - 20366.572: 97.7075% ( 11) 00:07:38.088 20366.572 - 20467.397: 97.8484% ( 11) 00:07:38.089 20467.397 - 20568.222: 97.9508% ( 8) 00:07:38.089 20568.222 - 20669.046: 98.0277% ( 6) 00:07:38.089 20669.046 - 20769.871: 98.1173% ( 7) 00:07:38.089 20769.871 - 20870.695: 98.1942% ( 6) 00:07:38.089 20870.695 - 20971.520: 98.2582% ( 5) 00:07:38.089 20971.520 - 21072.345: 98.3222% ( 5) 00:07:38.089 21072.345 - 21173.169: 98.3607% ( 3) 00:07:38.089 25811.102 - 26012.751: 98.4503% ( 7) 00:07:38.089 26012.751 - 26214.400: 98.5528% ( 8) 00:07:38.089 26214.400 - 26416.049: 98.6680% ( 9) 00:07:38.089 26416.049 - 26617.698: 98.7705% ( 8) 00:07:38.089 26617.698 - 26819.348: 98.8730% ( 8) 00:07:38.089 26819.348 - 27020.997: 98.9754% ( 8) 00:07:38.089 27020.997 - 27222.646: 99.0907% ( 9) 00:07:38.089 27222.646 - 27424.295: 99.1803% ( 7) 00:07:38.089 32667.175 - 32868.825: 99.1931% ( 1) 00:07:38.089 32868.825 - 33070.474: 99.2828% ( 7) 00:07:38.089 33070.474 - 33272.123: 99.3724% ( 7) 00:07:38.089 33272.123 - 33473.772: 99.4621% ( 7) 00:07:38.089 33473.772 - 33675.422: 99.5517% ( 7) 00:07:38.089 33675.422 - 33877.071: 99.6542% ( 8) 00:07:38.089 33877.071 - 34078.720: 99.7439% ( 7) 00:07:38.089 34078.720 - 34280.369: 99.8335% ( 7) 00:07:38.089 34280.369 - 34482.018: 99.9232% ( 7) 00:07:38.089 34482.018 - 34683.668: 100.0000% ( 6) 00:07:38.089 00:07:38.089 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:38.089 ============================================================================== 00:07:38.089 Range in us Cumulative IO count 00:07:38.089 10939.471 - 10989.883: 0.0256% ( 2) 00:07:38.089 10989.883 - 11040.295: 0.0640% ( 3) 00:07:38.089 11040.295 - 11090.708: 0.1153% ( 4) 00:07:38.089 11090.708 - 11141.120: 0.1281% ( 1) 00:07:38.089 11141.120 - 11191.532: 0.1409% ( 1) 00:07:38.089 11191.532 - 11241.945: 0.1665% ( 2) 00:07:38.089 11241.945 - 11292.357: 0.1793% ( 1) 00:07:38.089 11292.357 - 11342.769: 0.2049% ( 2) 00:07:38.089 11342.769 - 11393.182: 0.2305% ( 2) 00:07:38.089 11393.182 - 11443.594: 0.2561% ( 2) 00:07:38.089 11443.594 - 11494.006: 0.2946% ( 3) 00:07:38.089 11494.006 - 11544.418: 0.3586% ( 5) 00:07:38.089 11544.418 - 11594.831: 0.4098% ( 4) 00:07:38.089 11594.831 - 11645.243: 0.5123% ( 8) 00:07:38.089 11645.243 - 11695.655: 0.6276% ( 9) 00:07:38.089 11695.655 - 11746.068: 0.6788% ( 4) 00:07:38.089 11746.068 - 11796.480: 0.7300% ( 4) 00:07:38.089 11796.480 - 11846.892: 0.7812% ( 4) 00:07:38.089 11846.892 - 11897.305: 0.8069% ( 2) 00:07:38.089 11897.305 - 11947.717: 0.8709% ( 5) 00:07:38.089 11947.717 - 
11998.129: 0.9221% ( 4) 00:07:38.089 11998.129 - 12048.542: 0.9990% ( 6) 00:07:38.089 12048.542 - 12098.954: 1.0502% ( 4) 00:07:38.089 12098.954 - 12149.366: 1.1142% ( 5) 00:07:38.089 12149.366 - 12199.778: 1.1783% ( 5) 00:07:38.089 12199.778 - 12250.191: 1.2551% ( 6) 00:07:38.089 12250.191 - 12300.603: 1.3576% ( 8) 00:07:38.089 12300.603 - 12351.015: 1.4600% ( 8) 00:07:38.089 12351.015 - 12401.428: 1.5625% ( 8) 00:07:38.089 12401.428 - 12451.840: 1.6650% ( 8) 00:07:38.089 12451.840 - 12502.252: 1.7418% ( 6) 00:07:38.089 12502.252 - 12552.665: 1.8315% ( 7) 00:07:38.089 12552.665 - 12603.077: 1.9083% ( 6) 00:07:38.089 12603.077 - 12653.489: 2.0108% ( 8) 00:07:38.089 12653.489 - 12703.902: 2.1260% ( 9) 00:07:38.089 12703.902 - 12754.314: 2.2285% ( 8) 00:07:38.089 12754.314 - 12804.726: 2.3181% ( 7) 00:07:38.089 12804.726 - 12855.138: 2.4846% ( 13) 00:07:38.089 12855.138 - 12905.551: 2.6767% ( 15) 00:07:38.089 12905.551 - 13006.375: 3.0738% ( 31) 00:07:38.089 13006.375 - 13107.200: 3.5220% ( 35) 00:07:38.089 13107.200 - 13208.025: 4.0215% ( 39) 00:07:38.089 13208.025 - 13308.849: 4.4698% ( 35) 00:07:38.089 13308.849 - 13409.674: 4.9436% ( 37) 00:07:38.089 13409.674 - 13510.498: 5.4816% ( 42) 00:07:38.089 13510.498 - 13611.323: 6.3012% ( 64) 00:07:38.089 13611.323 - 13712.148: 7.0697% ( 60) 00:07:38.089 13712.148 - 13812.972: 7.9534% ( 69) 00:07:38.089 13812.972 - 13913.797: 8.9780% ( 80) 00:07:38.089 13913.797 - 14014.622: 10.0154% ( 81) 00:07:38.089 14014.622 - 14115.446: 11.1808% ( 91) 00:07:38.089 14115.446 - 14216.271: 12.4488% ( 99) 00:07:38.089 14216.271 - 14317.095: 14.0113% ( 122) 00:07:38.089 14317.095 - 14417.920: 15.7275% ( 134) 00:07:38.089 14417.920 - 14518.745: 17.5461% ( 142) 00:07:38.089 14518.745 - 14619.569: 19.3391% ( 140) 00:07:38.089 14619.569 - 14720.394: 21.0809% ( 136) 00:07:38.089 14720.394 - 14821.218: 22.6819% ( 125) 00:07:38.089 14821.218 - 14922.043: 24.4237% ( 136) 00:07:38.089 14922.043 - 15022.868: 26.2551% ( 143) 00:07:38.089 15022.868 - 15123.692: 28.2659% ( 157) 00:07:38.089 15123.692 - 15224.517: 30.4688% ( 172) 00:07:38.089 15224.517 - 15325.342: 32.8637% ( 187) 00:07:38.089 15325.342 - 15426.166: 34.8745% ( 157) 00:07:38.089 15426.166 - 15526.991: 36.6675% ( 140) 00:07:38.089 15526.991 - 15627.815: 38.4221% ( 137) 00:07:38.089 15627.815 - 15728.640: 40.0231% ( 125) 00:07:38.089 15728.640 - 15829.465: 41.8161% ( 140) 00:07:38.089 15829.465 - 15930.289: 43.9421% ( 166) 00:07:38.089 15930.289 - 16031.114: 45.8760% ( 151) 00:07:38.089 16031.114 - 16131.938: 47.7715% ( 148) 00:07:38.089 16131.938 - 16232.763: 49.7823% ( 157) 00:07:38.089 16232.763 - 16333.588: 51.8315% ( 160) 00:07:38.089 16333.588 - 16434.412: 54.1368% ( 180) 00:07:38.089 16434.412 - 16535.237: 56.3525% ( 173) 00:07:38.089 16535.237 - 16636.062: 58.7090% ( 184) 00:07:38.089 16636.062 - 16736.886: 61.0015% ( 179) 00:07:38.089 16736.886 - 16837.711: 63.4349% ( 190) 00:07:38.089 16837.711 - 16938.535: 65.6890% ( 176) 00:07:38.089 16938.535 - 17039.360: 67.9944% ( 180) 00:07:38.089 17039.360 - 17140.185: 70.1204% ( 166) 00:07:38.089 17140.185 - 17241.009: 72.0799% ( 153) 00:07:38.089 17241.009 - 17341.834: 73.7833% ( 133) 00:07:38.089 17341.834 - 17442.658: 75.4739% ( 132) 00:07:38.089 17442.658 - 17543.483: 76.9211% ( 113) 00:07:38.089 17543.483 - 17644.308: 78.4324% ( 118) 00:07:38.089 17644.308 - 17745.132: 79.9308% ( 117) 00:07:38.089 17745.132 - 17845.957: 81.2244% ( 101) 00:07:38.089 17845.957 - 17946.782: 82.4411% ( 95) 00:07:38.089 17946.782 - 18047.606: 83.2608% ( 64) 00:07:38.089 
18047.606 - 18148.431: 84.0548% ( 62) 00:07:38.089 18148.431 - 18249.255: 84.9513% ( 70) 00:07:38.089 18249.255 - 18350.080: 85.8863% ( 73) 00:07:38.089 18350.080 - 18450.905: 86.8340% ( 74) 00:07:38.089 18450.905 - 18551.729: 87.5897% ( 59) 00:07:38.089 18551.729 - 18652.554: 88.2812% ( 54) 00:07:38.089 18652.554 - 18753.378: 88.9472% ( 52) 00:07:38.089 18753.378 - 18854.203: 89.6901% ( 58) 00:07:38.089 18854.203 - 18955.028: 90.4329% ( 58) 00:07:38.089 18955.028 - 19055.852: 91.1245% ( 54) 00:07:38.089 19055.852 - 19156.677: 91.7520% ( 49) 00:07:38.089 19156.677 - 19257.502: 92.3284% ( 45) 00:07:38.089 19257.502 - 19358.326: 92.9431% ( 48) 00:07:38.089 19358.326 - 19459.151: 93.4426% ( 39) 00:07:38.089 19459.151 - 19559.975: 93.8268% ( 30) 00:07:38.089 19559.975 - 19660.800: 94.2623% ( 34) 00:07:38.089 19660.800 - 19761.625: 94.7234% ( 36) 00:07:38.089 19761.625 - 19862.449: 95.2357% ( 40) 00:07:38.089 19862.449 - 19963.274: 95.6583% ( 33) 00:07:38.089 19963.274 - 20064.098: 96.0809% ( 33) 00:07:38.089 20064.098 - 20164.923: 96.2859% ( 16) 00:07:38.089 20164.923 - 20265.748: 96.4780% ( 15) 00:07:38.089 20265.748 - 20366.572: 96.7469% ( 21) 00:07:38.089 20366.572 - 20467.397: 97.0415% ( 23) 00:07:38.089 20467.397 - 20568.222: 97.3105% ( 21) 00:07:38.089 20568.222 - 20669.046: 97.5922% ( 22) 00:07:38.089 20669.046 - 20769.871: 97.8484% ( 20) 00:07:38.089 20769.871 - 20870.695: 97.9892% ( 11) 00:07:38.089 20870.695 - 20971.520: 98.1301% ( 11) 00:07:38.089 20971.520 - 21072.345: 98.2070% ( 6) 00:07:38.089 21072.345 - 21173.169: 98.2966% ( 7) 00:07:38.089 21173.169 - 21273.994: 98.3607% ( 5) 00:07:38.089 25004.505 - 25105.329: 98.3863% ( 2) 00:07:38.089 25105.329 - 25206.154: 98.4375% ( 4) 00:07:38.089 25206.154 - 25306.978: 98.4887% ( 4) 00:07:38.089 25306.978 - 25407.803: 98.5400% ( 4) 00:07:38.089 25407.803 - 25508.628: 98.6040% ( 5) 00:07:38.089 25508.628 - 25609.452: 98.6552% ( 4) 00:07:38.089 25609.452 - 25710.277: 98.7065% ( 4) 00:07:38.089 25710.277 - 25811.102: 98.7577% ( 4) 00:07:38.089 25811.102 - 26012.751: 98.8601% ( 8) 00:07:38.089 26012.751 - 26214.400: 98.9754% ( 9) 00:07:38.089 26214.400 - 26416.049: 99.0779% ( 8) 00:07:38.089 26416.049 - 26617.698: 99.1803% ( 8) 00:07:38.089 32667.175 - 32868.825: 99.1931% ( 1) 00:07:38.089 32868.825 - 33070.474: 99.2828% ( 7) 00:07:38.089 33070.474 - 33272.123: 99.3852% ( 8) 00:07:38.089 33272.123 - 33473.772: 99.4749% ( 7) 00:07:38.089 33473.772 - 33675.422: 99.5774% ( 8) 00:07:38.089 33675.422 - 33877.071: 99.6542% ( 6) 00:07:38.089 33877.071 - 34078.720: 99.7567% ( 8) 00:07:38.089 34078.720 - 34280.369: 99.8463% ( 7) 00:07:38.089 34280.369 - 34482.018: 99.9360% ( 7) 00:07:38.089 34482.018 - 34683.668: 100.0000% ( 5) 00:07:38.089 00:07:38.089 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:38.089 ============================================================================== 00:07:38.089 Range in us Cumulative IO count 00:07:38.089 10889.058 - 10939.471: 0.1153% ( 9) 00:07:38.089 10939.471 - 10989.883: 0.1665% ( 4) 00:07:38.089 10989.883 - 11040.295: 0.2305% ( 5) 00:07:38.089 11040.295 - 11090.708: 0.2561% ( 2) 00:07:38.089 11090.708 - 11141.120: 0.2818% ( 2) 00:07:38.089 11141.120 - 11191.532: 0.3202% ( 3) 00:07:38.089 11191.532 - 11241.945: 0.3714% ( 4) 00:07:38.089 11241.945 - 11292.357: 0.4483% ( 6) 00:07:38.090 11292.357 - 11342.769: 0.5379% ( 7) 00:07:38.090 11342.769 - 11393.182: 0.6276% ( 7) 00:07:38.090 11393.182 - 11443.594: 0.7172% ( 7) 00:07:38.090 11443.594 - 11494.006: 0.7428% ( 2) 00:07:38.090 11494.006 - 
11544.418: 0.8197% ( 6) 00:07:38.090 11544.418 - 11594.831: 0.8581% ( 3) 00:07:38.090 11594.831 - 11645.243: 0.8965% ( 3) 00:07:38.090 11645.243 - 11695.655: 0.9349% ( 3) 00:07:38.090 11695.655 - 11746.068: 0.9862% ( 4) 00:07:38.090 11746.068 - 11796.480: 1.0374% ( 4) 00:07:38.090 11796.480 - 11846.892: 1.0886% ( 4) 00:07:38.090 11846.892 - 11897.305: 1.1527% ( 5) 00:07:38.090 11897.305 - 11947.717: 1.2039% ( 4) 00:07:38.090 11947.717 - 11998.129: 1.2679% ( 5) 00:07:38.090 11998.129 - 12048.542: 1.3320% ( 5) 00:07:38.090 12048.542 - 12098.954: 1.3960% ( 5) 00:07:38.090 12098.954 - 12149.366: 1.4728% ( 6) 00:07:38.090 12149.366 - 12199.778: 1.5241% ( 4) 00:07:38.090 12199.778 - 12250.191: 1.5753% ( 4) 00:07:38.090 12250.191 - 12300.603: 1.6393% ( 5) 00:07:38.090 12300.603 - 12351.015: 1.6906% ( 4) 00:07:38.090 12351.015 - 12401.428: 1.7546% ( 5) 00:07:38.090 12401.428 - 12451.840: 1.8058% ( 4) 00:07:38.090 12451.840 - 12502.252: 1.8699% ( 5) 00:07:38.090 12502.252 - 12552.665: 1.9083% ( 3) 00:07:38.090 12552.665 - 12603.077: 1.9595% ( 4) 00:07:38.090 12603.077 - 12653.489: 1.9980% ( 3) 00:07:38.090 12653.489 - 12703.902: 2.0748% ( 6) 00:07:38.090 12703.902 - 12754.314: 2.1644% ( 7) 00:07:38.090 12754.314 - 12804.726: 2.3053% ( 11) 00:07:38.090 12804.726 - 12855.138: 2.4206% ( 9) 00:07:38.090 12855.138 - 12905.551: 2.5999% ( 14) 00:07:38.090 12905.551 - 13006.375: 2.9457% ( 27) 00:07:38.090 13006.375 - 13107.200: 3.3683% ( 33) 00:07:38.090 13107.200 - 13208.025: 3.9062% ( 42) 00:07:38.090 13208.025 - 13308.849: 4.4826% ( 45) 00:07:38.090 13308.849 - 13409.674: 5.1998% ( 56) 00:07:38.090 13409.674 - 13510.498: 5.8145% ( 48) 00:07:38.090 13510.498 - 13611.323: 6.5190% ( 55) 00:07:38.090 13611.323 - 13712.148: 7.1977% ( 53) 00:07:38.090 13712.148 - 13812.972: 8.0046% ( 63) 00:07:38.090 13812.972 - 13913.797: 8.9267% ( 72) 00:07:38.090 13913.797 - 14014.622: 10.1050% ( 92) 00:07:38.090 14014.622 - 14115.446: 11.2065% ( 86) 00:07:38.090 14115.446 - 14216.271: 12.4103% ( 94) 00:07:38.090 14216.271 - 14317.095: 13.6527% ( 97) 00:07:38.090 14317.095 - 14417.920: 15.1255% ( 115) 00:07:38.090 14417.920 - 14518.745: 16.6112% ( 116) 00:07:38.090 14518.745 - 14619.569: 18.1737% ( 122) 00:07:38.090 14619.569 - 14720.394: 20.0179% ( 144) 00:07:38.090 14720.394 - 14821.218: 21.8238% ( 141) 00:07:38.090 14821.218 - 14922.043: 23.7449% ( 150) 00:07:38.090 14922.043 - 15022.868: 25.5507% ( 141) 00:07:38.090 15022.868 - 15123.692: 27.3694% ( 142) 00:07:38.090 15123.692 - 15224.517: 29.3545% ( 155) 00:07:38.090 15224.517 - 15325.342: 31.3397% ( 155) 00:07:38.090 15325.342 - 15426.166: 33.4785% ( 167) 00:07:38.090 15426.166 - 15526.991: 35.7198% ( 175) 00:07:38.090 15526.991 - 15627.815: 37.7433% ( 158) 00:07:38.090 15627.815 - 15728.640: 39.7285% ( 155) 00:07:38.090 15728.640 - 15829.465: 41.8289% ( 164) 00:07:38.090 15829.465 - 15930.289: 44.1214% ( 179) 00:07:38.090 15930.289 - 16031.114: 46.5804% ( 192) 00:07:38.090 16031.114 - 16131.938: 48.8473% ( 177) 00:07:38.090 16131.938 - 16232.763: 51.0502% ( 172) 00:07:38.090 16232.763 - 16333.588: 53.0097% ( 153) 00:07:38.090 16333.588 - 16434.412: 54.7387% ( 135) 00:07:38.090 16434.412 - 16535.237: 56.6342% ( 148) 00:07:38.090 16535.237 - 16636.062: 58.6322% ( 156) 00:07:38.090 16636.062 - 16736.886: 60.8094% ( 170) 00:07:38.090 16736.886 - 16837.711: 62.8714% ( 161) 00:07:38.090 16837.711 - 16938.535: 65.1511% ( 178) 00:07:38.090 16938.535 - 17039.360: 67.2259% ( 162) 00:07:38.090 17039.360 - 17140.185: 69.2751% ( 160) 00:07:38.090 17140.185 - 17241.009: 70.8760% 
( 125) 00:07:38.090 17241.009 - 17341.834: 72.4641% ( 124) 00:07:38.090 17341.834 - 17442.658: 74.0523% ( 124) 00:07:38.090 17442.658 - 17543.483: 75.5763% ( 119) 00:07:38.090 17543.483 - 17644.308: 77.1260% ( 121) 00:07:38.090 17644.308 - 17745.132: 78.6373% ( 118) 00:07:38.090 17745.132 - 17845.957: 80.0077% ( 107) 00:07:38.090 17845.957 - 17946.782: 81.2628% ( 98) 00:07:38.090 17946.782 - 18047.606: 82.5820% ( 103) 00:07:38.090 18047.606 - 18148.431: 83.9652% ( 108) 00:07:38.090 18148.431 - 18249.255: 85.4508% ( 116) 00:07:38.090 18249.255 - 18350.080: 86.6035% ( 90) 00:07:38.090 18350.080 - 18450.905: 87.6409% ( 81) 00:07:38.090 18450.905 - 18551.729: 88.7039% ( 83) 00:07:38.090 18551.729 - 18652.554: 89.8438% ( 89) 00:07:38.090 18652.554 - 18753.378: 90.6762% ( 65) 00:07:38.090 18753.378 - 18854.203: 91.4959% ( 64) 00:07:38.090 18854.203 - 18955.028: 92.2900% ( 62) 00:07:38.090 18955.028 - 19055.852: 93.0584% ( 60) 00:07:38.090 19055.852 - 19156.677: 93.6475% ( 46) 00:07:38.090 19156.677 - 19257.502: 94.1086% ( 36) 00:07:38.090 19257.502 - 19358.326: 94.5441% ( 34) 00:07:38.090 19358.326 - 19459.151: 94.9283% ( 30) 00:07:38.090 19459.151 - 19559.975: 95.3637% ( 34) 00:07:38.090 19559.975 - 19660.800: 95.6711% ( 24) 00:07:38.090 19660.800 - 19761.625: 95.9785% ( 24) 00:07:38.090 19761.625 - 19862.449: 96.2346% ( 20) 00:07:38.090 19862.449 - 19963.274: 96.4780% ( 19) 00:07:38.090 19963.274 - 20064.098: 96.6573% ( 14) 00:07:38.090 20064.098 - 20164.923: 96.7213% ( 5) 00:07:38.090 20467.397 - 20568.222: 96.8366% ( 9) 00:07:38.090 20568.222 - 20669.046: 97.0799% ( 19) 00:07:38.090 20669.046 - 20769.871: 97.2208% ( 11) 00:07:38.090 20769.871 - 20870.695: 97.3873% ( 13) 00:07:38.090 20870.695 - 20971.520: 97.5410% ( 12) 00:07:38.090 20971.520 - 21072.345: 97.6947% ( 12) 00:07:38.090 21072.345 - 21173.169: 97.8612% ( 13) 00:07:38.090 21173.169 - 21273.994: 98.0405% ( 14) 00:07:38.090 21273.994 - 21374.818: 98.2070% ( 13) 00:07:38.090 21374.818 - 21475.643: 98.3350% ( 10) 00:07:38.090 21475.643 - 21576.468: 98.3607% ( 2) 00:07:38.090 23492.135 - 23592.960: 98.4119% ( 4) 00:07:38.090 23592.960 - 23693.785: 98.4503% ( 3) 00:07:38.090 23693.785 - 23794.609: 98.5143% ( 5) 00:07:38.090 23794.609 - 23895.434: 98.5656% ( 4) 00:07:38.090 23895.434 - 23996.258: 98.6168% ( 4) 00:07:38.090 23996.258 - 24097.083: 98.6808% ( 5) 00:07:38.090 24097.083 - 24197.908: 98.7321% ( 4) 00:07:38.090 24197.908 - 24298.732: 98.7833% ( 4) 00:07:38.090 24298.732 - 24399.557: 98.8345% ( 4) 00:07:38.090 24399.557 - 24500.382: 98.8858% ( 4) 00:07:38.090 24500.382 - 24601.206: 98.9498% ( 5) 00:07:38.090 24601.206 - 24702.031: 99.0010% ( 4) 00:07:38.090 24702.031 - 24802.855: 99.0523% ( 4) 00:07:38.090 24802.855 - 24903.680: 99.1035% ( 4) 00:07:38.090 24903.680 - 25004.505: 99.1547% ( 4) 00:07:38.090 25004.505 - 25105.329: 99.1803% ( 2) 00:07:38.090 31658.929 - 31860.578: 99.1931% ( 1) 00:07:38.090 31860.578 - 32062.228: 99.2572% ( 5) 00:07:38.090 32062.228 - 32263.877: 99.3340% ( 6) 00:07:38.090 32263.877 - 32465.526: 99.3981% ( 5) 00:07:38.090 32465.526 - 32667.175: 99.4621% ( 5) 00:07:38.090 32667.175 - 32868.825: 99.5389% ( 6) 00:07:38.090 32868.825 - 33070.474: 99.6030% ( 5) 00:07:38.090 33070.474 - 33272.123: 99.6926% ( 7) 00:07:38.090 33272.123 - 33473.772: 99.7823% ( 7) 00:07:38.090 33473.772 - 33675.422: 99.8719% ( 7) 00:07:38.090 33675.422 - 33877.071: 99.9744% ( 8) 00:07:38.090 33877.071 - 34078.720: 100.0000% ( 2) 00:07:38.090 00:07:38.090 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:38.090 
============================================================================== 00:07:38.090 Range in us Cumulative IO count 00:07:38.090 10284.111 - 10334.523: 0.0508% ( 4) 00:07:38.090 10334.523 - 10384.935: 0.0762% ( 2) 00:07:38.090 10384.935 - 10435.348: 0.0889% ( 1) 00:07:38.090 10435.348 - 10485.760: 0.1143% ( 2) 00:07:38.090 10485.760 - 10536.172: 0.1397% ( 2) 00:07:38.090 10536.172 - 10586.585: 0.1651% ( 2) 00:07:38.090 10586.585 - 10636.997: 0.1905% ( 2) 00:07:38.090 10636.997 - 10687.409: 0.2160% ( 2) 00:07:38.090 10687.409 - 10737.822: 0.2414% ( 2) 00:07:38.090 10737.822 - 10788.234: 0.2668% ( 2) 00:07:38.090 10788.234 - 10838.646: 0.2922% ( 2) 00:07:38.090 10838.646 - 10889.058: 0.3176% ( 2) 00:07:38.090 10889.058 - 10939.471: 0.3430% ( 2) 00:07:38.090 10939.471 - 10989.883: 0.3684% ( 2) 00:07:38.090 10989.883 - 11040.295: 0.3938% ( 2) 00:07:38.090 11040.295 - 11090.708: 0.4192% ( 2) 00:07:38.090 11090.708 - 11141.120: 0.4446% ( 2) 00:07:38.090 11141.120 - 11191.532: 0.4700% ( 2) 00:07:38.090 11191.532 - 11241.945: 0.4954% ( 2) 00:07:38.090 11241.945 - 11292.357: 0.5208% ( 2) 00:07:38.090 11292.357 - 11342.769: 0.5462% ( 2) 00:07:38.090 11342.769 - 11393.182: 0.5843% ( 3) 00:07:38.090 11393.182 - 11443.594: 0.6098% ( 2) 00:07:38.090 11443.594 - 11494.006: 0.6479% ( 3) 00:07:38.090 11494.006 - 11544.418: 0.6860% ( 3) 00:07:38.090 11544.418 - 11594.831: 0.8003% ( 9) 00:07:38.090 11594.831 - 11645.243: 0.8511% ( 4) 00:07:38.090 11645.243 - 11695.655: 0.9146% ( 5) 00:07:38.090 11695.655 - 11746.068: 1.0163% ( 8) 00:07:38.090 11746.068 - 11796.480: 1.0544% ( 3) 00:07:38.090 11796.480 - 11846.892: 1.1052% ( 4) 00:07:38.090 11846.892 - 11897.305: 1.1433% ( 3) 00:07:38.090 11897.305 - 11947.717: 1.1814% ( 3) 00:07:38.090 11947.717 - 11998.129: 1.2068% ( 2) 00:07:38.090 11998.129 - 12048.542: 1.2830% ( 6) 00:07:38.090 12048.542 - 12098.954: 1.3847% ( 8) 00:07:38.090 12098.954 - 12149.366: 1.4990% ( 9) 00:07:38.090 12149.366 - 12199.778: 1.6133% ( 9) 00:07:38.091 12199.778 - 12250.191: 1.8039% ( 15) 00:07:38.091 12250.191 - 12300.603: 1.9436% ( 11) 00:07:38.091 12300.603 - 12351.015: 2.1087% ( 13) 00:07:38.091 12351.015 - 12401.428: 2.2612% ( 12) 00:07:38.091 12401.428 - 12451.840: 2.4390% ( 14) 00:07:38.091 12451.840 - 12502.252: 2.5661% ( 10) 00:07:38.091 12502.252 - 12552.665: 2.7439% ( 14) 00:07:38.091 12552.665 - 12603.077: 2.8709% ( 10) 00:07:38.091 12603.077 - 12653.489: 2.9980% ( 10) 00:07:38.091 12653.489 - 12703.902: 3.0996% ( 8) 00:07:38.091 12703.902 - 12754.314: 3.2012% ( 8) 00:07:38.091 12754.314 - 12804.726: 3.3028% ( 8) 00:07:38.091 12804.726 - 12855.138: 3.4299% ( 10) 00:07:38.091 12855.138 - 12905.551: 3.6077% ( 14) 00:07:38.091 12905.551 - 13006.375: 3.9634% ( 28) 00:07:38.091 13006.375 - 13107.200: 4.3572% ( 31) 00:07:38.091 13107.200 - 13208.025: 4.8908% ( 42) 00:07:38.091 13208.025 - 13308.849: 5.3227% ( 34) 00:07:38.091 13308.849 - 13409.674: 5.8689% ( 43) 00:07:38.091 13409.674 - 13510.498: 6.7708% ( 71) 00:07:38.091 13510.498 - 13611.323: 7.6728% ( 71) 00:07:38.091 13611.323 - 13712.148: 8.6001% ( 73) 00:07:38.091 13712.148 - 13812.972: 9.6037% ( 79) 00:07:38.091 13812.972 - 13913.797: 10.8486% ( 98) 00:07:38.091 13913.797 - 14014.622: 12.0046% ( 91) 00:07:38.091 14014.622 - 14115.446: 13.2495% ( 98) 00:07:38.091 14115.446 - 14216.271: 14.7612% ( 119) 00:07:38.091 14216.271 - 14317.095: 16.2475% ( 117) 00:07:38.091 14317.095 - 14417.920: 17.5305% ( 101) 00:07:38.091 14417.920 - 14518.745: 18.9151% ( 109) 00:07:38.091 14518.745 - 14619.569: 20.3252% ( 111) 
00:07:38.091 14619.569 - 14720.394: 21.8623% ( 121) 00:07:38.091 14720.394 - 14821.218: 23.5518% ( 133) 00:07:38.091 14821.218 - 14922.043: 25.3176% ( 139) 00:07:38.091 14922.043 - 15022.868: 27.2358% ( 151) 00:07:38.091 15022.868 - 15123.692: 29.1540% ( 151) 00:07:38.091 15123.692 - 15224.517: 31.1738% ( 159) 00:07:38.091 15224.517 - 15325.342: 33.1301% ( 154) 00:07:38.091 15325.342 - 15426.166: 35.1753% ( 161) 00:07:38.091 15426.166 - 15526.991: 37.1189% ( 153) 00:07:38.091 15526.991 - 15627.815: 39.1260% ( 158) 00:07:38.091 15627.815 - 15728.640: 41.2475% ( 167) 00:07:38.091 15728.640 - 15829.465: 43.1148% ( 147) 00:07:38.091 15829.465 - 15930.289: 44.8679% ( 138) 00:07:38.091 15930.289 - 16031.114: 46.5193% ( 130) 00:07:38.091 16031.114 - 16131.938: 48.1834% ( 131) 00:07:38.091 16131.938 - 16232.763: 50.0889% ( 150) 00:07:38.091 16232.763 - 16333.588: 52.0452% ( 154) 00:07:38.091 16333.588 - 16434.412: 53.7983% ( 138) 00:07:38.091 16434.412 - 16535.237: 55.8181% ( 159) 00:07:38.091 16535.237 - 16636.062: 57.9522% ( 168) 00:07:38.091 16636.062 - 16736.886: 59.7561% ( 142) 00:07:38.091 16736.886 - 16837.711: 61.5981% ( 145) 00:07:38.091 16837.711 - 16938.535: 63.5163% ( 151) 00:07:38.091 16938.535 - 17039.360: 65.5996% ( 164) 00:07:38.091 17039.360 - 17140.185: 67.4543% ( 146) 00:07:38.091 17140.185 - 17241.009: 69.3471% ( 149) 00:07:38.091 17241.009 - 17341.834: 71.2652% ( 151) 00:07:38.091 17341.834 - 17442.658: 73.1072% ( 145) 00:07:38.091 17442.658 - 17543.483: 74.7967% ( 133) 00:07:38.091 17543.483 - 17644.308: 76.4482% ( 130) 00:07:38.091 17644.308 - 17745.132: 77.8836% ( 113) 00:07:38.091 17745.132 - 17845.957: 79.2429% ( 107) 00:07:38.091 17845.957 - 17946.782: 80.6402% ( 110) 00:07:38.091 17946.782 - 18047.606: 82.0757% ( 113) 00:07:38.091 18047.606 - 18148.431: 83.6636% ( 125) 00:07:38.091 18148.431 - 18249.255: 85.0229% ( 107) 00:07:38.091 18249.255 - 18350.080: 86.4075% ( 109) 00:07:38.091 18350.080 - 18450.905: 87.8557% ( 114) 00:07:38.091 18450.905 - 18551.729: 88.8847% ( 81) 00:07:38.091 18551.729 - 18652.554: 89.8120% ( 73) 00:07:38.091 18652.554 - 18753.378: 90.5869% ( 61) 00:07:38.091 18753.378 - 18854.203: 91.5396% ( 75) 00:07:38.091 18854.203 - 18955.028: 92.5559% ( 80) 00:07:38.091 18955.028 - 19055.852: 93.5340% ( 77) 00:07:38.091 19055.852 - 19156.677: 94.3979% ( 68) 00:07:38.091 19156.677 - 19257.502: 95.0457% ( 51) 00:07:38.091 19257.502 - 19358.326: 95.6301% ( 46) 00:07:38.091 19358.326 - 19459.151: 96.1509% ( 41) 00:07:38.091 19459.151 - 19559.975: 96.5955% ( 35) 00:07:38.091 19559.975 - 19660.800: 96.9766% ( 30) 00:07:38.091 19660.800 - 19761.625: 97.3069% ( 26) 00:07:38.091 19761.625 - 19862.449: 97.5991% ( 23) 00:07:38.091 19862.449 - 19963.274: 97.7769% ( 14) 00:07:38.091 19963.274 - 20064.098: 97.9548% ( 14) 00:07:38.091 20064.098 - 20164.923: 98.0564% ( 8) 00:07:38.091 20164.923 - 20265.748: 98.1453% ( 7) 00:07:38.091 20265.748 - 20366.572: 98.2215% ( 6) 00:07:38.091 20366.572 - 20467.397: 98.2978% ( 6) 00:07:38.091 20467.397 - 20568.222: 98.3613% ( 5) 00:07:38.091 20568.222 - 20669.046: 98.3867% ( 2) 00:07:38.091 20669.046 - 20769.871: 98.4375% ( 4) 00:07:38.091 20769.871 - 20870.695: 98.4883% ( 4) 00:07:38.091 20870.695 - 20971.520: 98.5264% ( 3) 00:07:38.091 20971.520 - 21072.345: 98.5645% ( 3) 00:07:38.091 21072.345 - 21173.169: 98.6026% ( 3) 00:07:38.091 21173.169 - 21273.994: 98.6535% ( 4) 00:07:38.091 21273.994 - 21374.818: 98.7551% ( 8) 00:07:38.091 21374.818 - 21475.643: 98.8313% ( 6) 00:07:38.091 21475.643 - 21576.468: 98.8821% ( 4) 00:07:38.091 
21576.468 - 21677.292: 98.9583% ( 6) 00:07:38.091 21677.292 - 21778.117: 98.9837% ( 2) 00:07:38.091 21778.117 - 21878.942: 99.0600% ( 6) 00:07:38.091 21878.942 - 21979.766: 99.1235% ( 5) 00:07:38.091 21979.766 - 22080.591: 99.1870% ( 5) 00:07:38.091 23189.662 - 23290.486: 99.2251% ( 3) 00:07:38.091 23290.486 - 23391.311: 99.2759% ( 4) 00:07:38.091 23391.311 - 23492.135: 99.3267% ( 4) 00:07:38.091 23492.135 - 23592.960: 99.3775% ( 4) 00:07:38.091 23592.960 - 23693.785: 99.4411% ( 5) 00:07:38.091 23693.785 - 23794.609: 99.4919% ( 4) 00:07:38.091 23794.609 - 23895.434: 99.5427% ( 4) 00:07:38.091 23895.434 - 23996.258: 99.5935% ( 4) 00:07:38.091 23996.258 - 24097.083: 99.6570% ( 5) 00:07:38.091 24097.083 - 24197.908: 99.7078% ( 4) 00:07:38.091 24197.908 - 24298.732: 99.7586% ( 4) 00:07:38.091 24298.732 - 24399.557: 99.8095% ( 4) 00:07:38.091 24399.557 - 24500.382: 99.8730% ( 5) 00:07:38.091 24500.382 - 24601.206: 99.9238% ( 4) 00:07:38.091 24601.206 - 24702.031: 99.9746% ( 4) 00:07:38.091 24702.031 - 24802.855: 100.0000% ( 2) 00:07:38.091 00:07:38.091 19:57:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:39.038 Initializing NVMe Controllers 00:07:39.038 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.038 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.038 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.038 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.038 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:39.038 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:39.038 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:39.038 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:39.038 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:39.038 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:39.038 Initialization complete. Launching workers. 
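For reference, the write-workload results that follow are produced by the spdk_nvme_perf invocation recorded just above. A minimal sketch of that command is repeated here with the flag meanings spelled out; the annotations are assumptions based on the commonly documented spdk_nvme_perf options, not something taken from this build's output.
  # Sketch of the perf invocation recorded above. Flag notes are assumptions from the
  # usual spdk_nvme_perf help text:
  #   -q 128    queue depth (outstanding I/Os per namespace)
  #   -w write  100% write workload
  #   -o 12288  I/O size in bytes (12 KiB)
  #   -t 1      run time in seconds
  #   -LL       software latency tracking; giving -L twice requests the detailed per-bucket histograms
  #   -i 0      shared-memory group ID
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0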
00:07:39.038 ======================================================== 00:07:39.038 Latency(us) 00:07:39.038 Device Information : IOPS MiB/s Average min max 00:07:39.038 PCIE (0000:00:13.0) NSID 1 from core 0: 9147.06 107.19 14032.04 9420.25 38394.75 00:07:39.038 PCIE (0000:00:10.0) NSID 1 from core 0: 9147.06 107.19 14013.35 9087.38 36817.23 00:07:39.038 PCIE (0000:00:11.0) NSID 1 from core 0: 9147.06 107.19 13992.97 9215.19 35044.28 00:07:39.038 PCIE (0000:00:12.0) NSID 1 from core 0: 9147.06 107.19 13973.58 9168.65 34568.72 00:07:39.038 PCIE (0000:00:12.0) NSID 2 from core 0: 9147.06 107.19 13954.39 9068.32 33515.17 00:07:39.038 PCIE (0000:00:12.0) NSID 3 from core 0: 9211.03 107.94 13838.46 9340.19 25374.99 00:07:39.038 ======================================================== 00:07:39.038 Total : 54946.33 643.90 13967.31 9068.32 38394.75 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9779.988us 00:07:39.038 10.00000% : 11342.769us 00:07:39.038 25.00000% : 12401.428us 00:07:39.038 50.00000% : 13712.148us 00:07:39.038 75.00000% : 15224.517us 00:07:39.038 90.00000% : 16434.412us 00:07:39.038 95.00000% : 17341.834us 00:07:39.038 98.00000% : 19963.274us 00:07:39.038 99.00000% : 29037.489us 00:07:39.038 99.50000% : 37305.108us 00:07:39.038 99.90000% : 38313.354us 00:07:39.038 99.99000% : 38515.003us 00:07:39.038 99.99900% : 38515.003us 00:07:39.038 99.99990% : 38515.003us 00:07:39.038 99.99999% : 38515.003us 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9880.812us 00:07:39.038 10.00000% : 11393.182us 00:07:39.038 25.00000% : 12401.428us 00:07:39.038 50.00000% : 13611.323us 00:07:39.038 75.00000% : 15325.342us 00:07:39.038 90.00000% : 16535.237us 00:07:39.038 95.00000% : 17341.834us 00:07:39.038 98.00000% : 20366.572us 00:07:39.038 99.00000% : 28029.243us 00:07:39.038 99.50000% : 35691.914us 00:07:39.038 99.90000% : 36700.160us 00:07:39.038 99.99000% : 36901.809us 00:07:39.038 99.99900% : 36901.809us 00:07:39.038 99.99990% : 36901.809us 00:07:39.038 99.99999% : 36901.809us 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9830.400us 00:07:39.038 10.00000% : 11342.769us 00:07:39.038 25.00000% : 12401.428us 00:07:39.038 50.00000% : 13611.323us 00:07:39.038 75.00000% : 15325.342us 00:07:39.038 90.00000% : 16535.237us 00:07:39.038 95.00000% : 17442.658us 00:07:39.038 98.00000% : 21072.345us 00:07:39.038 99.00000% : 26819.348us 00:07:39.038 99.50000% : 33877.071us 00:07:39.038 99.90000% : 34885.317us 00:07:39.038 99.99000% : 35086.966us 00:07:39.038 99.99900% : 35086.966us 00:07:39.038 99.99990% : 35086.966us 00:07:39.038 99.99999% : 35086.966us 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9729.575us 00:07:39.038 10.00000% : 11342.769us 00:07:39.038 25.00000% : 12351.015us 00:07:39.038 50.00000% : 13712.148us 00:07:39.038 75.00000% : 15123.692us 00:07:39.038 90.00000% : 16535.237us 00:07:39.038 95.00000% : 17543.483us 00:07:39.038 98.00000% : 21273.994us 
00:07:39.038 99.00000% : 26416.049us 00:07:39.038 99.50000% : 33473.772us 00:07:39.038 99.90000% : 34482.018us 00:07:39.038 99.99000% : 34683.668us 00:07:39.038 99.99900% : 34683.668us 00:07:39.038 99.99990% : 34683.668us 00:07:39.038 99.99999% : 34683.668us 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9880.812us 00:07:39.038 10.00000% : 11141.120us 00:07:39.038 25.00000% : 12351.015us 00:07:39.038 50.00000% : 13712.148us 00:07:39.038 75.00000% : 15325.342us 00:07:39.038 90.00000% : 16535.237us 00:07:39.038 95.00000% : 17442.658us 00:07:39.038 98.00000% : 20568.222us 00:07:39.038 99.00000% : 25105.329us 00:07:39.038 99.50000% : 32465.526us 00:07:39.038 99.90000% : 33473.772us 00:07:39.038 99.99000% : 33675.422us 00:07:39.038 99.99900% : 33675.422us 00:07:39.038 99.99990% : 33675.422us 00:07:39.038 99.99999% : 33675.422us 00:07:39.038 00:07:39.038 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.038 ================================================================================= 00:07:39.038 1.00000% : 9779.988us 00:07:39.038 10.00000% : 11342.769us 00:07:39.038 25.00000% : 12351.015us 00:07:39.038 50.00000% : 13611.323us 00:07:39.038 75.00000% : 15325.342us 00:07:39.038 90.00000% : 16535.237us 00:07:39.038 95.00000% : 17341.834us 00:07:39.038 98.00000% : 18753.378us 00:07:39.038 99.00000% : 20366.572us 00:07:39.038 99.50000% : 24298.732us 00:07:39.038 99.90000% : 25206.154us 00:07:39.038 99.99000% : 25407.803us 00:07:39.038 99.99900% : 25407.803us 00:07:39.038 99.99990% : 25407.803us 00:07:39.038 99.99999% : 25407.803us 00:07:39.038 00:07:39.038 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.038 ============================================================================== 00:07:39.038 Range in us Cumulative IO count 00:07:39.038 9376.689 - 9427.102: 0.0109% ( 1) 00:07:39.038 9427.102 - 9477.514: 0.0546% ( 4) 00:07:39.038 9477.514 - 9527.926: 0.1311% ( 7) 00:07:39.038 9527.926 - 9578.338: 0.2295% ( 9) 00:07:39.038 9578.338 - 9628.751: 0.3606% ( 12) 00:07:39.038 9628.751 - 9679.163: 0.5682% ( 19) 00:07:39.038 9679.163 - 9729.575: 0.8741% ( 28) 00:07:39.039 9729.575 - 9779.988: 1.0380% ( 15) 00:07:39.039 9779.988 - 9830.400: 1.1582% ( 11) 00:07:39.039 9830.400 - 9880.812: 1.2238% ( 6) 00:07:39.039 9880.812 - 9931.225: 1.3003% ( 7) 00:07:39.039 9931.225 - 9981.637: 1.4314% ( 12) 00:07:39.039 9981.637 - 10032.049: 1.5625% ( 12) 00:07:39.039 10032.049 - 10082.462: 1.6390% ( 7) 00:07:39.039 10082.462 - 10132.874: 1.7264% ( 8) 00:07:39.039 10132.874 - 10183.286: 1.8029% ( 7) 00:07:39.039 10183.286 - 10233.698: 1.9122% ( 10) 00:07:39.039 10233.698 - 10284.111: 2.0870% ( 16) 00:07:39.039 10284.111 - 10334.523: 2.2399% ( 14) 00:07:39.039 10334.523 - 10384.935: 2.4257% ( 17) 00:07:39.039 10384.935 - 10435.348: 2.6115% ( 17) 00:07:39.039 10435.348 - 10485.760: 2.8628% ( 23) 00:07:39.039 10485.760 - 10536.172: 3.0376% ( 16) 00:07:39.039 10536.172 - 10586.585: 3.2015% ( 15) 00:07:39.039 10586.585 - 10636.997: 3.3763% ( 16) 00:07:39.039 10636.997 - 10687.409: 3.5839% ( 19) 00:07:39.039 10687.409 - 10737.822: 3.8899% ( 28) 00:07:39.039 10737.822 - 10788.234: 4.2177% ( 30) 00:07:39.039 10788.234 - 10838.646: 4.5892% ( 34) 00:07:39.039 10838.646 - 10889.058: 4.8733% ( 26) 00:07:39.039 10889.058 - 10939.471: 5.5288% ( 60) 00:07:39.039 10939.471 - 10989.883: 5.9550% ( 39) 00:07:39.039 10989.883 - 
11040.295: 6.4467% ( 45) 00:07:39.039 11040.295 - 11090.708: 7.1788% ( 67) 00:07:39.039 11090.708 - 11141.120: 7.7032% ( 48) 00:07:39.039 11141.120 - 11191.532: 8.1184% ( 38) 00:07:39.039 11191.532 - 11241.945: 8.6211% ( 46) 00:07:39.039 11241.945 - 11292.357: 9.3641% ( 68) 00:07:39.039 11292.357 - 11342.769: 10.3475% ( 90) 00:07:39.039 11342.769 - 11393.182: 11.3199% ( 89) 00:07:39.039 11393.182 - 11443.594: 12.1722% ( 78) 00:07:39.039 11443.594 - 11494.006: 12.7513% ( 53) 00:07:39.039 11494.006 - 11544.418: 13.5271% ( 71) 00:07:39.039 11544.418 - 11594.831: 14.4886% ( 88) 00:07:39.039 11594.831 - 11645.243: 15.2863% ( 73) 00:07:39.039 11645.243 - 11695.655: 16.1385% ( 78) 00:07:39.039 11695.655 - 11746.068: 16.9034% ( 70) 00:07:39.039 11746.068 - 11796.480: 17.4060% ( 46) 00:07:39.039 11796.480 - 11846.892: 17.9524% ( 50) 00:07:39.039 11846.892 - 11897.305: 18.4003% ( 41) 00:07:39.039 11897.305 - 11947.717: 18.8811% ( 44) 00:07:39.039 11947.717 - 11998.129: 19.4274% ( 50) 00:07:39.039 11998.129 - 12048.542: 19.9628% ( 49) 00:07:39.039 12048.542 - 12098.954: 20.7496% ( 72) 00:07:39.039 12098.954 - 12149.366: 21.3068% ( 51) 00:07:39.039 12149.366 - 12199.778: 21.8969% ( 54) 00:07:39.039 12199.778 - 12250.191: 22.5087% ( 56) 00:07:39.039 12250.191 - 12300.603: 23.2845% ( 71) 00:07:39.039 12300.603 - 12351.015: 24.2351% ( 87) 00:07:39.039 12351.015 - 12401.428: 25.2950% ( 97) 00:07:39.039 12401.428 - 12451.840: 26.2675% ( 89) 00:07:39.039 12451.840 - 12502.252: 27.1744% ( 83) 00:07:39.039 12502.252 - 12552.665: 28.2124% ( 95) 00:07:39.039 12552.665 - 12603.077: 29.3597% ( 105) 00:07:39.039 12603.077 - 12653.489: 30.3868% ( 94) 00:07:39.039 12653.489 - 12703.902: 31.3374% ( 87) 00:07:39.039 12703.902 - 12754.314: 32.2443% ( 83) 00:07:39.039 12754.314 - 12804.726: 33.1512% ( 83) 00:07:39.039 12804.726 - 12855.138: 34.1237% ( 89) 00:07:39.039 12855.138 - 12905.551: 35.1289% ( 92) 00:07:39.039 12905.551 - 13006.375: 37.3361% ( 202) 00:07:39.039 13006.375 - 13107.200: 39.4122% ( 190) 00:07:39.039 13107.200 - 13208.025: 41.2587% ( 169) 00:07:39.039 13208.025 - 13308.849: 43.1272% ( 171) 00:07:39.039 13308.849 - 13409.674: 45.1705% ( 187) 00:07:39.039 13409.674 - 13510.498: 47.1045% ( 177) 00:07:39.039 13510.498 - 13611.323: 49.3007% ( 201) 00:07:39.039 13611.323 - 13712.148: 51.4532% ( 197) 00:07:39.039 13712.148 - 13812.972: 53.5948% ( 196) 00:07:39.039 13812.972 - 13913.797: 55.6818% ( 191) 00:07:39.039 13913.797 - 14014.622: 57.1897% ( 138) 00:07:39.039 14014.622 - 14115.446: 58.8068% ( 148) 00:07:39.039 14115.446 - 14216.271: 60.5223% ( 157) 00:07:39.039 14216.271 - 14317.095: 61.8663% ( 123) 00:07:39.039 14317.095 - 14417.920: 62.8169% ( 87) 00:07:39.039 14417.920 - 14518.745: 63.7347% ( 84) 00:07:39.039 14518.745 - 14619.569: 65.2972% ( 143) 00:07:39.039 14619.569 - 14720.394: 67.0017% ( 156) 00:07:39.039 14720.394 - 14821.218: 68.5752% ( 144) 00:07:39.039 14821.218 - 14922.043: 70.2797% ( 156) 00:07:39.039 14922.043 - 15022.868: 72.0170% ( 159) 00:07:39.039 15022.868 - 15123.692: 73.6342% ( 148) 00:07:39.039 15123.692 - 15224.517: 75.3934% ( 161) 00:07:39.039 15224.517 - 15325.342: 77.1307% ( 159) 00:07:39.039 15325.342 - 15426.166: 78.9008% ( 162) 00:07:39.039 15426.166 - 15526.991: 80.3540% ( 133) 00:07:39.039 15526.991 - 15627.815: 82.1569% ( 165) 00:07:39.039 15627.815 - 15728.640: 83.7522% ( 146) 00:07:39.039 15728.640 - 15829.465: 85.1289% ( 126) 00:07:39.039 15829.465 - 15930.289: 86.3090% ( 108) 00:07:39.039 15930.289 - 16031.114: 87.6420% ( 122) 00:07:39.039 16031.114 - 16131.938: 
88.4506% ( 74) 00:07:39.039 16131.938 - 16232.763: 89.1608% ( 65) 00:07:39.039 16232.763 - 16333.588: 89.8383% ( 62) 00:07:39.039 16333.588 - 16434.412: 90.6031% ( 70) 00:07:39.039 16434.412 - 16535.237: 91.2369% ( 58) 00:07:39.039 16535.237 - 16636.062: 91.6849% ( 41) 00:07:39.039 16636.062 - 16736.886: 92.1001% ( 38) 00:07:39.039 16736.886 - 16837.711: 92.7994% ( 64) 00:07:39.039 16837.711 - 16938.535: 93.3020% ( 46) 00:07:39.039 16938.535 - 17039.360: 93.8811% ( 53) 00:07:39.039 17039.360 - 17140.185: 94.3837% ( 46) 00:07:39.039 17140.185 - 17241.009: 94.7334% ( 32) 00:07:39.039 17241.009 - 17341.834: 95.1486% ( 38) 00:07:39.039 17341.834 - 17442.658: 95.4873% ( 31) 00:07:39.039 17442.658 - 17543.483: 95.6840% ( 18) 00:07:39.039 17543.483 - 17644.308: 95.8370% ( 14) 00:07:39.039 17644.308 - 17745.132: 96.0774% ( 22) 00:07:39.039 17745.132 - 17845.957: 96.1976% ( 11) 00:07:39.039 17845.957 - 17946.782: 96.3287% ( 12) 00:07:39.039 17946.782 - 18047.606: 96.4598% ( 12) 00:07:39.039 18047.606 - 18148.431: 96.5035% ( 4) 00:07:39.039 18148.431 - 18249.255: 96.5144% ( 1) 00:07:39.039 18249.255 - 18350.080: 96.5581% ( 4) 00:07:39.039 18350.080 - 18450.905: 96.6018% ( 4) 00:07:39.039 18450.905 - 18551.729: 96.6346% ( 3) 00:07:39.039 18551.729 - 18652.554: 96.7220% ( 8) 00:07:39.039 18652.554 - 18753.378: 96.7985% ( 7) 00:07:39.039 18753.378 - 18854.203: 96.9624% ( 15) 00:07:39.039 18854.203 - 18955.028: 97.1045% ( 13) 00:07:39.039 18955.028 - 19055.852: 97.2137% ( 10) 00:07:39.039 19055.852 - 19156.677: 97.3230% ( 10) 00:07:39.039 19156.677 - 19257.502: 97.3885% ( 6) 00:07:39.039 19257.502 - 19358.326: 97.4650% ( 7) 00:07:39.039 19358.326 - 19459.151: 97.5306% ( 6) 00:07:39.039 19459.151 - 19559.975: 97.5743% ( 4) 00:07:39.039 19559.975 - 19660.800: 97.6617% ( 8) 00:07:39.039 19660.800 - 19761.625: 97.7928% ( 12) 00:07:39.039 19761.625 - 19862.449: 97.9021% ( 10) 00:07:39.039 19862.449 - 19963.274: 98.0878% ( 17) 00:07:39.039 19963.274 - 20064.098: 98.3282% ( 22) 00:07:39.039 20064.098 - 20164.923: 98.4484% ( 11) 00:07:39.039 20164.923 - 20265.748: 98.5358% ( 8) 00:07:39.039 20265.748 - 20366.572: 98.6014% ( 6) 00:07:39.039 28029.243 - 28230.892: 98.6451% ( 4) 00:07:39.039 28230.892 - 28432.542: 98.7434% ( 9) 00:07:39.039 28432.542 - 28634.191: 98.8309% ( 8) 00:07:39.039 28634.191 - 28835.840: 98.9292% ( 9) 00:07:39.039 28835.840 - 29037.489: 99.0166% ( 8) 00:07:39.039 29037.489 - 29239.138: 99.1040% ( 8) 00:07:39.039 29239.138 - 29440.788: 99.2024% ( 9) 00:07:39.039 29440.788 - 29642.437: 99.3007% ( 9) 00:07:39.039 36498.511 - 36700.160: 99.3335% ( 3) 00:07:39.039 36700.160 - 36901.809: 99.4100% ( 7) 00:07:39.039 36901.809 - 37103.458: 99.4755% ( 6) 00:07:39.039 37103.458 - 37305.108: 99.5411% ( 6) 00:07:39.039 37305.108 - 37506.757: 99.6066% ( 6) 00:07:39.039 37506.757 - 37708.406: 99.6941% ( 8) 00:07:39.039 37708.406 - 37910.055: 99.7815% ( 8) 00:07:39.039 37910.055 - 38111.705: 99.8689% ( 8) 00:07:39.039 38111.705 - 38313.354: 99.9672% ( 9) 00:07:39.039 38313.354 - 38515.003: 100.0000% ( 3) 00:07:39.039 00:07:39.039 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.039 ============================================================================== 00:07:39.039 Range in us Cumulative IO count 00:07:39.039 9074.215 - 9124.628: 0.0109% ( 1) 00:07:39.039 9175.040 - 9225.452: 0.0765% ( 6) 00:07:39.039 9225.452 - 9275.865: 0.0874% ( 1) 00:07:39.039 9275.865 - 9326.277: 0.1748% ( 8) 00:07:39.039 9326.277 - 9376.689: 0.2950% ( 11) 00:07:39.039 9376.689 - 9427.102: 0.3497% ( 5) 
00:07:39.039 9427.102 - 9477.514: 0.4043% ( 5) 00:07:39.039 9477.514 - 9527.926: 0.4261% ( 2) 00:07:39.039 9527.926 - 9578.338: 0.5026% ( 7) 00:07:39.039 9578.338 - 9628.751: 0.5682% ( 6) 00:07:39.039 9628.751 - 9679.163: 0.6010% ( 3) 00:07:39.039 9679.163 - 9729.575: 0.6447% ( 4) 00:07:39.039 9729.575 - 9779.988: 0.7867% ( 13) 00:07:39.039 9779.988 - 9830.400: 0.9725% ( 17) 00:07:39.039 9830.400 - 9880.812: 1.1473% ( 16) 00:07:39.039 9880.812 - 9931.225: 1.3003% ( 14) 00:07:39.039 9931.225 - 9981.637: 1.4423% ( 13) 00:07:39.039 9981.637 - 10032.049: 1.5516% ( 10) 00:07:39.039 10032.049 - 10082.462: 1.6718% ( 11) 00:07:39.039 10082.462 - 10132.874: 1.8794% ( 19) 00:07:39.039 10132.874 - 10183.286: 2.1635% ( 26) 00:07:39.039 10183.286 - 10233.698: 2.3601% ( 18) 00:07:39.040 10233.698 - 10284.111: 2.4476% ( 8) 00:07:39.040 10284.111 - 10334.523: 2.6115% ( 15) 00:07:39.040 10334.523 - 10384.935: 2.8081% ( 18) 00:07:39.040 10384.935 - 10435.348: 3.1359% ( 30) 00:07:39.040 10435.348 - 10485.760: 3.4419% ( 28) 00:07:39.040 10485.760 - 10536.172: 3.6932% ( 23) 00:07:39.040 10536.172 - 10586.585: 3.9008% ( 19) 00:07:39.040 10586.585 - 10636.997: 4.0538% ( 14) 00:07:39.040 10636.997 - 10687.409: 4.1958% ( 13) 00:07:39.040 10687.409 - 10737.822: 4.4580% ( 24) 00:07:39.040 10737.822 - 10788.234: 4.7421% ( 26) 00:07:39.040 10788.234 - 10838.646: 5.0153% ( 25) 00:07:39.040 10838.646 - 10889.058: 5.4305% ( 38) 00:07:39.040 10889.058 - 10939.471: 5.8239% ( 36) 00:07:39.040 10939.471 - 10989.883: 6.2172% ( 36) 00:07:39.040 10989.883 - 11040.295: 6.7417% ( 48) 00:07:39.040 11040.295 - 11090.708: 7.3754% ( 58) 00:07:39.040 11090.708 - 11141.120: 7.8890% ( 47) 00:07:39.040 11141.120 - 11191.532: 8.2933% ( 37) 00:07:39.040 11191.532 - 11241.945: 8.7522% ( 42) 00:07:39.040 11241.945 - 11292.357: 9.2548% ( 46) 00:07:39.040 11292.357 - 11342.769: 9.9432% ( 63) 00:07:39.040 11342.769 - 11393.182: 10.7408% ( 73) 00:07:39.040 11393.182 - 11443.594: 11.5166% ( 71) 00:07:39.040 11443.594 - 11494.006: 12.3142% ( 73) 00:07:39.040 11494.006 - 11544.418: 13.1228% ( 74) 00:07:39.040 11544.418 - 11594.831: 13.8986% ( 71) 00:07:39.040 11594.831 - 11645.243: 14.5979% ( 64) 00:07:39.040 11645.243 - 11695.655: 15.3737% ( 71) 00:07:39.040 11695.655 - 11746.068: 16.1604% ( 72) 00:07:39.040 11746.068 - 11796.480: 16.7504% ( 54) 00:07:39.040 11796.480 - 11846.892: 17.5590% ( 74) 00:07:39.040 11846.892 - 11897.305: 18.3129% ( 69) 00:07:39.040 11897.305 - 11947.717: 19.0778% ( 70) 00:07:39.040 11947.717 - 11998.129: 19.6023% ( 48) 00:07:39.040 11998.129 - 12048.542: 20.1705% ( 52) 00:07:39.040 12048.542 - 12098.954: 20.8807% ( 65) 00:07:39.040 12098.954 - 12149.366: 21.7657% ( 81) 00:07:39.040 12149.366 - 12199.778: 22.3011% ( 49) 00:07:39.040 12199.778 - 12250.191: 22.9130% ( 56) 00:07:39.040 12250.191 - 12300.603: 23.6888% ( 71) 00:07:39.040 12300.603 - 12351.015: 24.8580% ( 107) 00:07:39.040 12351.015 - 12401.428: 25.8086% ( 87) 00:07:39.040 12401.428 - 12451.840: 26.8684% ( 97) 00:07:39.040 12451.840 - 12502.252: 27.7972% ( 85) 00:07:39.040 12502.252 - 12552.665: 28.7697% ( 89) 00:07:39.040 12552.665 - 12603.077: 29.6875% ( 84) 00:07:39.040 12603.077 - 12653.489: 30.5288% ( 77) 00:07:39.040 12653.489 - 12703.902: 31.3483% ( 75) 00:07:39.040 12703.902 - 12754.314: 32.2662% ( 84) 00:07:39.040 12754.314 - 12804.726: 33.3916% ( 103) 00:07:39.040 12804.726 - 12855.138: 34.3859% ( 91) 00:07:39.040 12855.138 - 12905.551: 35.3802% ( 91) 00:07:39.040 12905.551 - 13006.375: 37.3361% ( 179) 00:07:39.040 13006.375 - 13107.200: 39.7946% ( 
225) 00:07:39.040 13107.200 - 13208.025: 42.2421% ( 224) 00:07:39.040 13208.025 - 13308.849: 44.2308% ( 182) 00:07:39.040 13308.849 - 13409.674: 46.5035% ( 208) 00:07:39.040 13409.674 - 13510.498: 48.6342% ( 195) 00:07:39.040 13510.498 - 13611.323: 50.1420% ( 138) 00:07:39.040 13611.323 - 13712.148: 52.2072% ( 189) 00:07:39.040 13712.148 - 13812.972: 53.8352% ( 149) 00:07:39.040 13812.972 - 13913.797: 55.5070% ( 153) 00:07:39.040 13913.797 - 14014.622: 57.2880% ( 163) 00:07:39.040 14014.622 - 14115.446: 58.9161% ( 149) 00:07:39.040 14115.446 - 14216.271: 60.4349% ( 139) 00:07:39.040 14216.271 - 14317.095: 62.2705% ( 168) 00:07:39.040 14317.095 - 14417.920: 63.8440% ( 144) 00:07:39.040 14417.920 - 14518.745: 65.1224% ( 117) 00:07:39.040 14518.745 - 14619.569: 66.3134% ( 109) 00:07:39.040 14619.569 - 14720.394: 67.5481% ( 113) 00:07:39.040 14720.394 - 14821.218: 69.0559% ( 138) 00:07:39.040 14821.218 - 14922.043: 70.3781% ( 121) 00:07:39.040 14922.043 - 15022.868: 71.7220% ( 123) 00:07:39.040 15022.868 - 15123.692: 72.9240% ( 110) 00:07:39.040 15123.692 - 15224.517: 74.0931% ( 107) 00:07:39.040 15224.517 - 15325.342: 75.4917% ( 128) 00:07:39.040 15325.342 - 15426.166: 76.8466% ( 124) 00:07:39.040 15426.166 - 15526.991: 78.3545% ( 138) 00:07:39.040 15526.991 - 15627.815: 79.8733% ( 139) 00:07:39.040 15627.815 - 15728.640: 81.4576% ( 145) 00:07:39.040 15728.640 - 15829.465: 82.8453% ( 127) 00:07:39.040 15829.465 - 15930.289: 84.0691% ( 112) 00:07:39.040 15930.289 - 16031.114: 85.4349% ( 125) 00:07:39.040 16031.114 - 16131.938: 86.5822% ( 105) 00:07:39.040 16131.938 - 16232.763: 87.6202% ( 95) 00:07:39.040 16232.763 - 16333.588: 88.5817% ( 88) 00:07:39.040 16333.588 - 16434.412: 89.6744% ( 100) 00:07:39.040 16434.412 - 16535.237: 90.8326% ( 106) 00:07:39.040 16535.237 - 16636.062: 91.5428% ( 65) 00:07:39.040 16636.062 - 16736.886: 92.1438% ( 55) 00:07:39.040 16736.886 - 16837.711: 92.7448% ( 55) 00:07:39.040 16837.711 - 16938.535: 93.3129% ( 52) 00:07:39.040 16938.535 - 17039.360: 93.9030% ( 54) 00:07:39.040 17039.360 - 17140.185: 94.4712% ( 52) 00:07:39.040 17140.185 - 17241.009: 94.9410% ( 43) 00:07:39.040 17241.009 - 17341.834: 95.4983% ( 51) 00:07:39.040 17341.834 - 17442.658: 95.9244% ( 39) 00:07:39.040 17442.658 - 17543.483: 96.1320% ( 19) 00:07:39.040 17543.483 - 17644.308: 96.3615% ( 21) 00:07:39.040 17644.308 - 17745.132: 96.5581% ( 18) 00:07:39.040 17745.132 - 17845.957: 96.6892% ( 12) 00:07:39.040 17845.957 - 17946.782: 96.8094% ( 11) 00:07:39.040 17946.782 - 18047.606: 96.9187% ( 10) 00:07:39.040 18047.606 - 18148.431: 97.0280% ( 10) 00:07:39.040 18148.431 - 18249.255: 97.1591% ( 12) 00:07:39.040 18249.255 - 18350.080: 97.2137% ( 5) 00:07:39.040 18350.080 - 18450.905: 97.3448% ( 12) 00:07:39.040 18450.905 - 18551.729: 97.3885% ( 4) 00:07:39.040 18551.729 - 18652.554: 97.4432% ( 5) 00:07:39.040 18652.554 - 18753.378: 97.5306% ( 8) 00:07:39.040 18753.378 - 18854.203: 97.5743% ( 4) 00:07:39.040 18854.203 - 18955.028: 97.5962% ( 2) 00:07:39.040 18955.028 - 19055.852: 97.6508% ( 5) 00:07:39.040 19055.852 - 19156.677: 97.6945% ( 4) 00:07:39.040 19156.677 - 19257.502: 97.7382% ( 4) 00:07:39.040 19257.502 - 19358.326: 97.7819% ( 4) 00:07:39.040 19358.326 - 19459.151: 97.8475% ( 6) 00:07:39.040 19459.151 - 19559.975: 97.8693% ( 2) 00:07:39.040 19559.975 - 19660.800: 97.9021% ( 3) 00:07:39.040 19963.274 - 20064.098: 97.9458% ( 4) 00:07:39.040 20064.098 - 20164.923: 97.9677% ( 2) 00:07:39.040 20164.923 - 20265.748: 97.9895% ( 2) 00:07:39.040 20265.748 - 20366.572: 98.0114% ( 2) 00:07:39.040 
20366.572 - 20467.397: 98.0988% ( 8) 00:07:39.040 20467.397 - 20568.222: 98.2190% ( 11) 00:07:39.040 20568.222 - 20669.046: 98.2736% ( 5) 00:07:39.040 20669.046 - 20769.871: 98.2955% ( 2) 00:07:39.040 20769.871 - 20870.695: 98.3392% ( 4) 00:07:39.040 20971.520 - 21072.345: 98.3719% ( 3) 00:07:39.040 21072.345 - 21173.169: 98.4047% ( 3) 00:07:39.040 21173.169 - 21273.994: 98.4266% ( 2) 00:07:39.040 21273.994 - 21374.818: 98.5140% ( 8) 00:07:39.040 21374.818 - 21475.643: 98.5249% ( 1) 00:07:39.040 21475.643 - 21576.468: 98.5686% ( 4) 00:07:39.040 21576.468 - 21677.292: 98.6014% ( 3) 00:07:39.040 26819.348 - 27020.997: 98.6670% ( 6) 00:07:39.040 27020.997 - 27222.646: 98.7216% ( 5) 00:07:39.040 27222.646 - 27424.295: 98.8090% ( 8) 00:07:39.040 27424.295 - 27625.945: 98.8855% ( 7) 00:07:39.040 27625.945 - 27827.594: 98.9729% ( 8) 00:07:39.040 27827.594 - 28029.243: 99.0494% ( 7) 00:07:39.040 28029.243 - 28230.892: 99.1368% ( 8) 00:07:39.040 28230.892 - 28432.542: 99.2133% ( 7) 00:07:39.040 28432.542 - 28634.191: 99.2898% ( 7) 00:07:39.040 28634.191 - 28835.840: 99.3007% ( 1) 00:07:39.040 35086.966 - 35288.615: 99.3772% ( 7) 00:07:39.040 35288.615 - 35490.265: 99.4646% ( 8) 00:07:39.040 35490.265 - 35691.914: 99.5411% ( 7) 00:07:39.040 35691.914 - 35893.563: 99.6285% ( 8) 00:07:39.040 35893.563 - 36095.212: 99.6722% ( 4) 00:07:39.040 36095.212 - 36296.862: 99.8142% ( 13) 00:07:39.040 36296.862 - 36498.511: 99.8689% ( 5) 00:07:39.040 36498.511 - 36700.160: 99.9454% ( 7) 00:07:39.040 36700.160 - 36901.809: 100.0000% ( 5) 00:07:39.040 00:07:39.040 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.040 ============================================================================== 00:07:39.040 Range in us Cumulative IO count 00:07:39.040 9175.040 - 9225.452: 0.0109% ( 1) 00:07:39.040 9275.865 - 9326.277: 0.0328% ( 2) 00:07:39.040 9326.277 - 9376.689: 0.0765% ( 4) 00:07:39.040 9376.689 - 9427.102: 0.1202% ( 4) 00:07:39.040 9427.102 - 9477.514: 0.1748% ( 5) 00:07:39.040 9477.514 - 9527.926: 0.2295% ( 5) 00:07:39.040 9527.926 - 9578.338: 0.3169% ( 8) 00:07:39.040 9578.338 - 9628.751: 0.4698% ( 14) 00:07:39.040 9628.751 - 9679.163: 0.6010% ( 12) 00:07:39.040 9679.163 - 9729.575: 0.7102% ( 10) 00:07:39.040 9729.575 - 9779.988: 0.8195% ( 10) 00:07:39.040 9779.988 - 9830.400: 1.0490% ( 21) 00:07:39.040 9830.400 - 9880.812: 1.1691% ( 11) 00:07:39.040 9880.812 - 9931.225: 1.2238% ( 5) 00:07:39.040 9931.225 - 9981.637: 1.2784% ( 5) 00:07:39.040 9981.637 - 10032.049: 1.3221% ( 4) 00:07:39.040 10032.049 - 10082.462: 1.3549% ( 3) 00:07:39.040 10082.462 - 10132.874: 1.3877% ( 3) 00:07:39.040 10132.874 - 10183.286: 1.4095% ( 2) 00:07:39.040 10183.286 - 10233.698: 1.4532% ( 4) 00:07:39.040 10233.698 - 10284.111: 1.5188% ( 6) 00:07:39.040 10284.111 - 10334.523: 1.6171% ( 9) 00:07:39.040 10334.523 - 10384.935: 1.7264% ( 10) 00:07:39.040 10384.935 - 10435.348: 1.9122% ( 17) 00:07:39.040 10435.348 - 10485.760: 2.1525% ( 22) 00:07:39.040 10485.760 - 10536.172: 2.5896% ( 40) 00:07:39.040 10536.172 - 10586.585: 2.9720% ( 35) 00:07:39.041 10586.585 - 10636.997: 3.4747% ( 46) 00:07:39.041 10636.997 - 10687.409: 4.0210% ( 50) 00:07:39.041 10687.409 - 10737.822: 4.5127% ( 45) 00:07:39.041 10737.822 - 10788.234: 4.8295% ( 29) 00:07:39.041 10788.234 - 10838.646: 5.1355% ( 28) 00:07:39.041 10838.646 - 10889.058: 5.3977% ( 24) 00:07:39.041 10889.058 - 10939.471: 5.6709% ( 25) 00:07:39.041 10939.471 - 10989.883: 6.1298% ( 42) 00:07:39.041 10989.883 - 11040.295: 6.5559% ( 39) 00:07:39.041 11040.295 - 11090.708: 
6.9712% ( 38) 00:07:39.041 11090.708 - 11141.120: 7.3536% ( 35) 00:07:39.041 11141.120 - 11191.532: 7.8781% ( 48) 00:07:39.041 11191.532 - 11241.945: 8.8177% ( 86) 00:07:39.041 11241.945 - 11292.357: 9.5498% ( 67) 00:07:39.041 11292.357 - 11342.769: 10.2928% ( 68) 00:07:39.041 11342.769 - 11393.182: 11.4183% ( 103) 00:07:39.041 11393.182 - 11443.594: 12.4235% ( 92) 00:07:39.041 11443.594 - 11494.006: 13.3523% ( 85) 00:07:39.041 11494.006 - 11544.418: 14.0844% ( 67) 00:07:39.041 11544.418 - 11594.831: 14.7727% ( 63) 00:07:39.041 11594.831 - 11645.243: 15.4392% ( 61) 00:07:39.041 11645.243 - 11695.655: 16.0948% ( 60) 00:07:39.041 11695.655 - 11746.068: 16.6412% ( 50) 00:07:39.041 11746.068 - 11796.480: 17.3077% ( 61) 00:07:39.041 11796.480 - 11846.892: 17.8649% ( 51) 00:07:39.041 11846.892 - 11897.305: 18.5315% ( 61) 00:07:39.041 11897.305 - 11947.717: 19.2526% ( 66) 00:07:39.041 11947.717 - 11998.129: 19.9410% ( 63) 00:07:39.041 11998.129 - 12048.542: 20.5310% ( 54) 00:07:39.041 12048.542 - 12098.954: 21.2522% ( 66) 00:07:39.041 12098.954 - 12149.366: 21.9515% ( 64) 00:07:39.041 12149.366 - 12199.778: 22.5415% ( 54) 00:07:39.041 12199.778 - 12250.191: 23.1643% ( 57) 00:07:39.041 12250.191 - 12300.603: 23.8199% ( 60) 00:07:39.041 12300.603 - 12351.015: 24.7596% ( 86) 00:07:39.041 12351.015 - 12401.428: 25.3606% ( 55) 00:07:39.041 12401.428 - 12451.840: 25.9725% ( 56) 00:07:39.041 12451.840 - 12502.252: 26.7264% ( 69) 00:07:39.041 12502.252 - 12552.665: 27.6661% ( 86) 00:07:39.041 12552.665 - 12603.077: 28.5621% ( 82) 00:07:39.041 12603.077 - 12653.489: 29.5673% ( 92) 00:07:39.041 12653.489 - 12703.902: 30.6381% ( 98) 00:07:39.041 12703.902 - 12754.314: 31.8182% ( 108) 00:07:39.041 12754.314 - 12804.726: 33.1731% ( 124) 00:07:39.041 12804.726 - 12855.138: 34.3641% ( 109) 00:07:39.041 12855.138 - 12905.551: 35.8064% ( 132) 00:07:39.041 12905.551 - 13006.375: 38.5927% ( 255) 00:07:39.041 13006.375 - 13107.200: 40.7670% ( 199) 00:07:39.041 13107.200 - 13208.025: 43.1272% ( 216) 00:07:39.041 13208.025 - 13308.849: 45.0940% ( 180) 00:07:39.041 13308.849 - 13409.674: 47.2793% ( 200) 00:07:39.041 13409.674 - 13510.498: 49.5739% ( 210) 00:07:39.041 13510.498 - 13611.323: 51.8684% ( 210) 00:07:39.041 13611.323 - 13712.148: 53.5621% ( 155) 00:07:39.041 13712.148 - 13812.972: 55.1355% ( 144) 00:07:39.041 13812.972 - 13913.797: 56.6215% ( 136) 00:07:39.041 13913.797 - 14014.622: 58.0420% ( 130) 00:07:39.041 14014.622 - 14115.446: 59.5498% ( 138) 00:07:39.041 14115.446 - 14216.271: 61.0686% ( 139) 00:07:39.041 14216.271 - 14317.095: 62.5437% ( 135) 00:07:39.041 14317.095 - 14417.920: 64.2483% ( 156) 00:07:39.041 14417.920 - 14518.745: 65.8872% ( 150) 00:07:39.041 14518.745 - 14619.569: 67.6246% ( 159) 00:07:39.041 14619.569 - 14720.394: 69.3291% ( 156) 00:07:39.041 14720.394 - 14821.218: 70.6294% ( 119) 00:07:39.041 14821.218 - 14922.043: 71.7439% ( 102) 00:07:39.041 14922.043 - 15022.868: 73.0004% ( 115) 00:07:39.041 15022.868 - 15123.692: 74.0385% ( 95) 00:07:39.041 15123.692 - 15224.517: 74.9891% ( 87) 00:07:39.041 15224.517 - 15325.342: 75.8851% ( 82) 00:07:39.041 15325.342 - 15426.166: 76.7701% ( 81) 00:07:39.041 15426.166 - 15526.991: 77.7426% ( 89) 00:07:39.041 15526.991 - 15627.815: 78.8134% ( 98) 00:07:39.041 15627.815 - 15728.640: 80.0262% ( 111) 00:07:39.041 15728.640 - 15829.465: 81.5122% ( 136) 00:07:39.041 15829.465 - 15930.289: 82.7360% ( 112) 00:07:39.041 15930.289 - 16031.114: 84.2657% ( 140) 00:07:39.041 16031.114 - 16131.938: 85.4895% ( 112) 00:07:39.041 16131.938 - 16232.763: 86.9209% 
( 131) 00:07:39.041 16232.763 - 16333.588: 88.2321% ( 120) 00:07:39.041 16333.588 - 16434.412: 89.3357% ( 101) 00:07:39.041 16434.412 - 16535.237: 90.1989% ( 79) 00:07:39.041 16535.237 - 16636.062: 90.9747% ( 71) 00:07:39.041 16636.062 - 16736.886: 91.5756% ( 55) 00:07:39.041 16736.886 - 16837.711: 92.2203% ( 59) 00:07:39.041 16837.711 - 16938.535: 92.7775% ( 51) 00:07:39.041 16938.535 - 17039.360: 93.2037% ( 39) 00:07:39.041 17039.360 - 17140.185: 93.7063% ( 46) 00:07:39.041 17140.185 - 17241.009: 94.1761% ( 43) 00:07:39.041 17241.009 - 17341.834: 94.7006% ( 48) 00:07:39.041 17341.834 - 17442.658: 95.2906% ( 54) 00:07:39.041 17442.658 - 17543.483: 95.7168% ( 39) 00:07:39.041 17543.483 - 17644.308: 96.0774% ( 33) 00:07:39.041 17644.308 - 17745.132: 96.4052% ( 30) 00:07:39.041 17745.132 - 17845.957: 96.6674% ( 24) 00:07:39.041 17845.957 - 17946.782: 96.9078% ( 22) 00:07:39.041 17946.782 - 18047.606: 97.0717% ( 15) 00:07:39.041 18047.606 - 18148.431: 97.2137% ( 13) 00:07:39.041 18148.431 - 18249.255: 97.3121% ( 9) 00:07:39.041 18249.255 - 18350.080: 97.4760% ( 15) 00:07:39.041 18350.080 - 18450.905: 97.6071% ( 12) 00:07:39.041 18450.905 - 18551.729: 97.7273% ( 11) 00:07:39.041 18551.729 - 18652.554: 97.8038% ( 7) 00:07:39.041 18652.554 - 18753.378: 97.8802% ( 7) 00:07:39.041 18753.378 - 18854.203: 97.9021% ( 2) 00:07:39.041 20769.871 - 20870.695: 97.9130% ( 1) 00:07:39.041 20870.695 - 20971.520: 97.9458% ( 3) 00:07:39.041 20971.520 - 21072.345: 98.0004% ( 5) 00:07:39.041 21072.345 - 21173.169: 98.0441% ( 4) 00:07:39.041 21173.169 - 21273.994: 98.0878% ( 4) 00:07:39.041 21273.994 - 21374.818: 98.1425% ( 5) 00:07:39.041 21374.818 - 21475.643: 98.1862% ( 4) 00:07:39.041 21475.643 - 21576.468: 98.2299% ( 4) 00:07:39.041 21576.468 - 21677.292: 98.2736% ( 4) 00:07:39.041 21677.292 - 21778.117: 98.3282% ( 5) 00:07:39.041 21778.117 - 21878.942: 98.3829% ( 5) 00:07:39.041 21878.942 - 21979.766: 98.4266% ( 4) 00:07:39.041 21979.766 - 22080.591: 98.4703% ( 4) 00:07:39.041 22080.591 - 22181.415: 98.5140% ( 4) 00:07:39.041 22181.415 - 22282.240: 98.5577% ( 4) 00:07:39.041 22282.240 - 22383.065: 98.6014% ( 4) 00:07:39.041 25811.102 - 26012.751: 98.6779% ( 7) 00:07:39.041 26012.751 - 26214.400: 98.7653% ( 8) 00:07:39.041 26214.400 - 26416.049: 98.8527% ( 8) 00:07:39.041 26416.049 - 26617.698: 98.9510% ( 9) 00:07:39.041 26617.698 - 26819.348: 99.0385% ( 8) 00:07:39.041 26819.348 - 27020.997: 99.1259% ( 8) 00:07:39.041 27020.997 - 27222.646: 99.2133% ( 8) 00:07:39.041 27222.646 - 27424.295: 99.3007% ( 8) 00:07:39.041 33272.123 - 33473.772: 99.3335% ( 3) 00:07:39.041 33473.772 - 33675.422: 99.4209% ( 8) 00:07:39.041 33675.422 - 33877.071: 99.5083% ( 8) 00:07:39.041 33877.071 - 34078.720: 99.5957% ( 8) 00:07:39.041 34078.720 - 34280.369: 99.6722% ( 7) 00:07:39.041 34280.369 - 34482.018: 99.7596% ( 8) 00:07:39.041 34482.018 - 34683.668: 99.8361% ( 7) 00:07:39.041 34683.668 - 34885.317: 99.9235% ( 8) 00:07:39.041 34885.317 - 35086.966: 100.0000% ( 7) 00:07:39.041 00:07:39.041 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.041 ============================================================================== 00:07:39.041 Range in us Cumulative IO count 00:07:39.041 9124.628 - 9175.040: 0.0109% ( 1) 00:07:39.041 9175.040 - 9225.452: 0.0656% ( 5) 00:07:39.041 9225.452 - 9275.865: 0.1202% ( 5) 00:07:39.041 9275.865 - 9326.277: 0.1967% ( 7) 00:07:39.041 9326.277 - 9376.689: 0.2841% ( 8) 00:07:39.041 9376.689 - 9427.102: 0.4043% ( 11) 00:07:39.041 9427.102 - 9477.514: 0.5026% ( 9) 00:07:39.041 
9477.514 - 9527.926: 0.5682% ( 6) 00:07:39.041 9527.926 - 9578.338: 0.6774% ( 10) 00:07:39.041 9578.338 - 9628.751: 0.7758% ( 9) 00:07:39.041 9628.751 - 9679.163: 0.8960% ( 11) 00:07:39.041 9679.163 - 9729.575: 1.0599% ( 15) 00:07:39.041 9729.575 - 9779.988: 1.1473% ( 8) 00:07:39.041 9779.988 - 9830.400: 1.1910% ( 4) 00:07:39.041 9830.400 - 9880.812: 1.2675% ( 7) 00:07:39.041 9880.812 - 9931.225: 1.3221% ( 5) 00:07:39.041 9931.225 - 9981.637: 1.3877% ( 6) 00:07:39.041 9981.637 - 10032.049: 1.4642% ( 7) 00:07:39.041 10032.049 - 10082.462: 1.5406% ( 7) 00:07:39.041 10082.462 - 10132.874: 1.5734% ( 3) 00:07:39.041 10132.874 - 10183.286: 1.6390% ( 6) 00:07:39.041 10183.286 - 10233.698: 1.8247% ( 17) 00:07:39.041 10233.698 - 10284.111: 2.1088% ( 26) 00:07:39.041 10284.111 - 10334.523: 2.4476% ( 31) 00:07:39.041 10334.523 - 10384.935: 2.8300% ( 35) 00:07:39.041 10384.935 - 10435.348: 3.1687% ( 31) 00:07:39.041 10435.348 - 10485.760: 3.6167% ( 41) 00:07:39.041 10485.760 - 10536.172: 3.9663% ( 32) 00:07:39.041 10536.172 - 10586.585: 4.3160% ( 32) 00:07:39.041 10586.585 - 10636.997: 4.7858% ( 43) 00:07:39.041 10636.997 - 10687.409: 5.1683% ( 35) 00:07:39.041 10687.409 - 10737.822: 5.5944% ( 39) 00:07:39.041 10737.822 - 10788.234: 6.1189% ( 48) 00:07:39.041 10788.234 - 10838.646: 6.4685% ( 32) 00:07:39.041 10838.646 - 10889.058: 6.8073% ( 31) 00:07:39.041 10889.058 - 10939.471: 7.0913% ( 26) 00:07:39.041 10939.471 - 10989.883: 7.5175% ( 39) 00:07:39.041 10989.883 - 11040.295: 7.8890% ( 34) 00:07:39.042 11040.295 - 11090.708: 8.2933% ( 37) 00:07:39.042 11090.708 - 11141.120: 8.7522% ( 42) 00:07:39.042 11141.120 - 11191.532: 9.0909% ( 31) 00:07:39.042 11191.532 - 11241.945: 9.4515% ( 33) 00:07:39.042 11241.945 - 11292.357: 9.9760% ( 48) 00:07:39.042 11292.357 - 11342.769: 10.7408% ( 70) 00:07:39.042 11342.769 - 11393.182: 11.4183% ( 62) 00:07:39.042 11393.182 - 11443.594: 12.1394% ( 66) 00:07:39.042 11443.594 - 11494.006: 12.8387% ( 64) 00:07:39.042 11494.006 - 11544.418: 13.3086% ( 43) 00:07:39.042 11544.418 - 11594.831: 13.8221% ( 47) 00:07:39.042 11594.831 - 11645.243: 14.1608% ( 31) 00:07:39.042 11645.243 - 11695.655: 14.5651% ( 37) 00:07:39.042 11695.655 - 11746.068: 14.9913% ( 39) 00:07:39.042 11746.068 - 11796.480: 15.5157% ( 48) 00:07:39.042 11796.480 - 11846.892: 15.9965% ( 44) 00:07:39.042 11846.892 - 11897.305: 16.7504% ( 69) 00:07:39.042 11897.305 - 11947.717: 17.4607% ( 65) 00:07:39.042 11947.717 - 11998.129: 18.3566% ( 82) 00:07:39.042 11998.129 - 12048.542: 19.2089% ( 78) 00:07:39.042 12048.542 - 12098.954: 19.9410% ( 67) 00:07:39.042 12098.954 - 12149.366: 21.1538% ( 111) 00:07:39.042 12149.366 - 12199.778: 22.1482% ( 91) 00:07:39.042 12199.778 - 12250.191: 23.3829% ( 113) 00:07:39.042 12250.191 - 12300.603: 24.4755% ( 100) 00:07:39.042 12300.603 - 12351.015: 25.4261% ( 87) 00:07:39.042 12351.015 - 12401.428: 26.3767% ( 87) 00:07:39.042 12401.428 - 12451.840: 27.3383% ( 88) 00:07:39.042 12451.840 - 12502.252: 28.3326% ( 91) 00:07:39.042 12502.252 - 12552.665: 29.5673% ( 113) 00:07:39.042 12552.665 - 12603.077: 30.5507% ( 90) 00:07:39.042 12603.077 - 12653.489: 31.3702% ( 75) 00:07:39.042 12653.489 - 12703.902: 32.3427% ( 89) 00:07:39.042 12703.902 - 12754.314: 33.3807% ( 95) 00:07:39.042 12754.314 - 12804.726: 34.6372% ( 115) 00:07:39.042 12804.726 - 12855.138: 35.6753% ( 95) 00:07:39.042 12855.138 - 12905.551: 36.5931% ( 84) 00:07:39.042 12905.551 - 13006.375: 38.4834% ( 173) 00:07:39.042 13006.375 - 13107.200: 40.6687% ( 200) 00:07:39.042 13107.200 - 13208.025: 42.4716% ( 165) 
00:07:39.042 13208.025 - 13308.849: 44.4165% ( 178) 00:07:39.042 13308.849 - 13409.674: 46.2740% ( 170) 00:07:39.042 13409.674 - 13510.498: 48.0769% ( 165) 00:07:39.042 13510.498 - 13611.323: 49.9781% ( 174) 00:07:39.042 13611.323 - 13712.148: 52.2399% ( 207) 00:07:39.042 13712.148 - 13812.972: 54.1302% ( 173) 00:07:39.042 13812.972 - 13913.797: 56.0205% ( 173) 00:07:39.042 13913.797 - 14014.622: 57.9218% ( 174) 00:07:39.042 14014.622 - 14115.446: 59.6045% ( 154) 00:07:39.042 14115.446 - 14216.271: 61.4401% ( 168) 00:07:39.042 14216.271 - 14317.095: 62.8278% ( 127) 00:07:39.042 14317.095 - 14417.920: 64.0188% ( 109) 00:07:39.042 14417.920 - 14518.745: 65.2863% ( 116) 00:07:39.042 14518.745 - 14619.569: 66.6630% ( 126) 00:07:39.042 14619.569 - 14720.394: 68.1272% ( 134) 00:07:39.042 14720.394 - 14821.218: 69.6788% ( 142) 00:07:39.042 14821.218 - 14922.043: 71.5363% ( 170) 00:07:39.042 14922.043 - 15022.868: 73.5468% ( 184) 00:07:39.042 15022.868 - 15123.692: 75.2732% ( 158) 00:07:39.042 15123.692 - 15224.517: 76.8029% ( 140) 00:07:39.042 15224.517 - 15325.342: 77.6989% ( 82) 00:07:39.042 15325.342 - 15426.166: 78.5402% ( 77) 00:07:39.042 15426.166 - 15526.991: 79.4362% ( 82) 00:07:39.042 15526.991 - 15627.815: 80.4414% ( 92) 00:07:39.042 15627.815 - 15728.640: 81.4248% ( 90) 00:07:39.042 15728.640 - 15829.465: 82.4956% ( 98) 00:07:39.042 15829.465 - 15930.289: 83.5664% ( 98) 00:07:39.042 15930.289 - 16031.114: 84.8667% ( 119) 00:07:39.042 16031.114 - 16131.938: 86.2107% ( 123) 00:07:39.042 16131.938 - 16232.763: 87.5328% ( 121) 00:07:39.042 16232.763 - 16333.588: 88.6364% ( 101) 00:07:39.042 16333.588 - 16434.412: 89.6962% ( 97) 00:07:39.042 16434.412 - 16535.237: 90.5813% ( 81) 00:07:39.042 16535.237 - 16636.062: 91.3571% ( 71) 00:07:39.042 16636.062 - 16736.886: 92.0127% ( 60) 00:07:39.042 16736.886 - 16837.711: 92.5918% ( 53) 00:07:39.042 16837.711 - 16938.535: 93.1163% ( 48) 00:07:39.042 16938.535 - 17039.360: 93.6298% ( 47) 00:07:39.042 17039.360 - 17140.185: 94.1106% ( 44) 00:07:39.042 17140.185 - 17241.009: 94.3728% ( 24) 00:07:39.042 17241.009 - 17341.834: 94.5476% ( 16) 00:07:39.042 17341.834 - 17442.658: 94.7771% ( 21) 00:07:39.042 17442.658 - 17543.483: 95.0175% ( 22) 00:07:39.042 17543.483 - 17644.308: 95.3562% ( 31) 00:07:39.042 17644.308 - 17745.132: 95.6294% ( 25) 00:07:39.042 17745.132 - 17845.957: 96.1429% ( 47) 00:07:39.042 17845.957 - 17946.782: 96.4052% ( 24) 00:07:39.042 17946.782 - 18047.606: 96.6346% ( 21) 00:07:39.042 18047.606 - 18148.431: 96.8422% ( 19) 00:07:39.042 18148.431 - 18249.255: 97.0170% ( 16) 00:07:39.042 18249.255 - 18350.080: 97.1809% ( 15) 00:07:39.042 18350.080 - 18450.905: 97.3339% ( 14) 00:07:39.042 18450.905 - 18551.729: 97.5415% ( 19) 00:07:39.042 18551.729 - 18652.554: 97.6508% ( 10) 00:07:39.042 18652.554 - 18753.378: 97.7163% ( 6) 00:07:39.042 18753.378 - 18854.203: 97.7601% ( 4) 00:07:39.042 18854.203 - 18955.028: 97.8038% ( 4) 00:07:39.042 18955.028 - 19055.852: 97.8365% ( 3) 00:07:39.042 19055.852 - 19156.677: 97.8802% ( 4) 00:07:39.042 19156.677 - 19257.502: 97.9021% ( 2) 00:07:39.042 20971.520 - 21072.345: 97.9130% ( 1) 00:07:39.042 21072.345 - 21173.169: 97.9567% ( 4) 00:07:39.042 21173.169 - 21273.994: 98.0114% ( 5) 00:07:39.042 21273.994 - 21374.818: 98.0660% ( 5) 00:07:39.042 21374.818 - 21475.643: 98.1097% ( 4) 00:07:39.042 21475.643 - 21576.468: 98.1643% ( 5) 00:07:39.042 21576.468 - 21677.292: 98.2190% ( 5) 00:07:39.042 21677.292 - 21778.117: 98.2627% ( 4) 00:07:39.042 21778.117 - 21878.942: 98.3064% ( 4) 00:07:39.042 21878.942 - 
21979.766: 98.3501% ( 4) 00:07:39.042 21979.766 - 22080.591: 98.3938% ( 4) 00:07:39.042 22080.591 - 22181.415: 98.4375% ( 4) 00:07:39.042 22181.415 - 22282.240: 98.4812% ( 4) 00:07:39.042 22282.240 - 22383.065: 98.5358% ( 5) 00:07:39.042 22383.065 - 22483.889: 98.5905% ( 5) 00:07:39.042 22483.889 - 22584.714: 98.6014% ( 1) 00:07:39.042 25306.978 - 25407.803: 98.6123% ( 1) 00:07:39.042 25407.803 - 25508.628: 98.6342% ( 2) 00:07:39.042 25508.628 - 25609.452: 98.6670% ( 3) 00:07:39.042 25609.452 - 25710.277: 98.6997% ( 3) 00:07:39.042 25710.277 - 25811.102: 98.7434% ( 4) 00:07:39.042 25811.102 - 26012.751: 98.8418% ( 9) 00:07:39.042 26012.751 - 26214.400: 98.9292% ( 8) 00:07:39.042 26214.400 - 26416.049: 99.0166% ( 8) 00:07:39.042 26416.049 - 26617.698: 99.1040% ( 8) 00:07:39.042 26617.698 - 26819.348: 99.1914% ( 8) 00:07:39.042 26819.348 - 27020.997: 99.2788% ( 8) 00:07:39.042 27020.997 - 27222.646: 99.3007% ( 2) 00:07:39.042 32868.825 - 33070.474: 99.3335% ( 3) 00:07:39.042 33070.474 - 33272.123: 99.4209% ( 8) 00:07:39.042 33272.123 - 33473.772: 99.5083% ( 8) 00:07:39.042 33473.772 - 33675.422: 99.6066% ( 9) 00:07:39.042 33675.422 - 33877.071: 99.6831% ( 7) 00:07:39.042 33877.071 - 34078.720: 99.7705% ( 8) 00:07:39.042 34078.720 - 34280.369: 99.8580% ( 8) 00:07:39.042 34280.369 - 34482.018: 99.9563% ( 9) 00:07:39.042 34482.018 - 34683.668: 100.0000% ( 4) 00:07:39.042 00:07:39.042 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.042 ============================================================================== 00:07:39.042 Range in us Cumulative IO count 00:07:39.042 9023.803 - 9074.215: 0.0109% ( 1) 00:07:39.042 9074.215 - 9124.628: 0.0546% ( 4) 00:07:39.042 9124.628 - 9175.040: 0.0874% ( 3) 00:07:39.042 9175.040 - 9225.452: 0.1202% ( 3) 00:07:39.042 9225.452 - 9275.865: 0.1530% ( 3) 00:07:39.042 9275.865 - 9326.277: 0.2404% ( 8) 00:07:39.042 9326.277 - 9376.689: 0.3497% ( 10) 00:07:39.042 9376.689 - 9427.102: 0.4589% ( 10) 00:07:39.042 9427.102 - 9477.514: 0.5245% ( 6) 00:07:39.042 9477.514 - 9527.926: 0.5573% ( 3) 00:07:39.042 9527.926 - 9578.338: 0.5900% ( 3) 00:07:39.042 9578.338 - 9628.751: 0.6337% ( 4) 00:07:39.042 9628.751 - 9679.163: 0.6993% ( 6) 00:07:39.042 9679.163 - 9729.575: 0.7649% ( 6) 00:07:39.042 9729.575 - 9779.988: 0.8741% ( 10) 00:07:39.042 9779.988 - 9830.400: 0.9506% ( 7) 00:07:39.042 9830.400 - 9880.812: 1.0927% ( 13) 00:07:39.042 9880.812 - 9931.225: 1.2566% ( 15) 00:07:39.042 9931.225 - 9981.637: 1.4314% ( 16) 00:07:39.042 9981.637 - 10032.049: 1.5625% ( 12) 00:07:39.042 10032.049 - 10082.462: 1.6608% ( 9) 00:07:39.042 10082.462 - 10132.874: 1.7920% ( 12) 00:07:39.042 10132.874 - 10183.286: 1.9886% ( 18) 00:07:39.042 10183.286 - 10233.698: 2.2727% ( 26) 00:07:39.042 10233.698 - 10284.111: 2.6552% ( 35) 00:07:39.042 10284.111 - 10334.523: 3.0594% ( 37) 00:07:39.042 10334.523 - 10384.935: 3.4309% ( 34) 00:07:39.042 10384.935 - 10435.348: 3.8899% ( 42) 00:07:39.042 10435.348 - 10485.760: 4.3925% ( 46) 00:07:39.042 10485.760 - 10536.172: 4.9279% ( 49) 00:07:39.042 10536.172 - 10586.585: 5.3431% ( 38) 00:07:39.042 10586.585 - 10636.997: 5.8020% ( 42) 00:07:39.042 10636.997 - 10687.409: 6.0970% ( 27) 00:07:39.042 10687.409 - 10737.822: 6.4030% ( 28) 00:07:39.042 10737.822 - 10788.234: 6.7635% ( 33) 00:07:39.042 10788.234 - 10838.646: 7.0586% ( 27) 00:07:39.042 10838.646 - 10889.058: 7.4956% ( 40) 00:07:39.042 10889.058 - 10939.471: 8.1075% ( 56) 00:07:39.042 10939.471 - 10989.883: 8.7413% ( 58) 00:07:39.042 10989.883 - 11040.295: 9.3204% ( 53) 
00:07:39.042 11040.295 - 11090.708: 9.7574% ( 40) 00:07:39.042 11090.708 - 11141.120: 10.3802% ( 57) 00:07:39.042 11141.120 - 11191.532: 10.8282% ( 41) 00:07:39.043 11191.532 - 11241.945: 11.2872% ( 42) 00:07:39.043 11241.945 - 11292.357: 11.8007% ( 47) 00:07:39.043 11292.357 - 11342.769: 12.2050% ( 37) 00:07:39.043 11342.769 - 11393.182: 12.6093% ( 37) 00:07:39.043 11393.182 - 11443.594: 12.9261% ( 29) 00:07:39.043 11443.594 - 11494.006: 13.2758% ( 32) 00:07:39.043 11494.006 - 11544.418: 13.6254% ( 32) 00:07:39.043 11544.418 - 11594.831: 14.0625% ( 40) 00:07:39.043 11594.831 - 11645.243: 14.6635% ( 55) 00:07:39.043 11645.243 - 11695.655: 15.1552% ( 45) 00:07:39.043 11695.655 - 11746.068: 15.6578% ( 46) 00:07:39.043 11746.068 - 11796.480: 16.2260% ( 52) 00:07:39.043 11796.480 - 11846.892: 16.7067% ( 44) 00:07:39.043 11846.892 - 11897.305: 17.4825% ( 71) 00:07:39.043 11897.305 - 11947.717: 18.1709% ( 63) 00:07:39.043 11947.717 - 11998.129: 18.9358% ( 70) 00:07:39.043 11998.129 - 12048.542: 19.6351% ( 64) 00:07:39.043 12048.542 - 12098.954: 20.4218% ( 72) 00:07:39.043 12098.954 - 12149.366: 21.4379% ( 93) 00:07:39.043 12149.366 - 12199.778: 22.3885% ( 87) 00:07:39.043 12199.778 - 12250.191: 23.2299% ( 77) 00:07:39.043 12250.191 - 12300.603: 24.2461% ( 93) 00:07:39.043 12300.603 - 12351.015: 25.2732% ( 94) 00:07:39.043 12351.015 - 12401.428: 26.2347% ( 88) 00:07:39.043 12401.428 - 12451.840: 27.4257% ( 109) 00:07:39.043 12451.840 - 12502.252: 28.2998% ( 80) 00:07:39.043 12502.252 - 12552.665: 29.0756% ( 71) 00:07:39.043 12552.665 - 12603.077: 30.0044% ( 85) 00:07:39.043 12603.077 - 12653.489: 30.9441% ( 86) 00:07:39.043 12653.489 - 12703.902: 31.8400% ( 82) 00:07:39.043 12703.902 - 12754.314: 32.7579% ( 84) 00:07:39.043 12754.314 - 12804.726: 33.8068% ( 96) 00:07:39.043 12804.726 - 12855.138: 34.7356% ( 85) 00:07:39.043 12855.138 - 12905.551: 35.6316% ( 82) 00:07:39.043 12905.551 - 13006.375: 37.4781% ( 169) 00:07:39.043 13006.375 - 13107.200: 39.6088% ( 195) 00:07:39.043 13107.200 - 13208.025: 41.0184% ( 129) 00:07:39.043 13208.025 - 13308.849: 42.6683% ( 151) 00:07:39.043 13308.849 - 13409.674: 44.7006% ( 186) 00:07:39.043 13409.674 - 13510.498: 46.9843% ( 209) 00:07:39.043 13510.498 - 13611.323: 49.3226% ( 214) 00:07:39.043 13611.323 - 13712.148: 51.4860% ( 198) 00:07:39.043 13712.148 - 13812.972: 53.7260% ( 205) 00:07:39.043 13812.972 - 13913.797: 55.7255% ( 183) 00:07:39.043 13913.797 - 14014.622: 57.6814% ( 179) 00:07:39.043 14014.622 - 14115.446: 59.5935% ( 175) 00:07:39.043 14115.446 - 14216.271: 61.5494% ( 179) 00:07:39.043 14216.271 - 14317.095: 63.3195% ( 162) 00:07:39.043 14317.095 - 14417.920: 64.9038% ( 145) 00:07:39.043 14417.920 - 14518.745: 66.4226% ( 139) 00:07:39.043 14518.745 - 14619.569: 67.6464% ( 112) 00:07:39.043 14619.569 - 14720.394: 68.9467% ( 119) 00:07:39.043 14720.394 - 14821.218: 69.8427% ( 82) 00:07:39.043 14821.218 - 14922.043: 70.9572% ( 102) 00:07:39.043 14922.043 - 15022.868: 71.9406% ( 90) 00:07:39.043 15022.868 - 15123.692: 72.9677% ( 94) 00:07:39.043 15123.692 - 15224.517: 74.5739% ( 147) 00:07:39.043 15224.517 - 15325.342: 76.2893% ( 157) 00:07:39.043 15325.342 - 15426.166: 77.9174% ( 149) 00:07:39.043 15426.166 - 15526.991: 79.4253% ( 138) 00:07:39.043 15526.991 - 15627.815: 80.9222% ( 137) 00:07:39.043 15627.815 - 15728.640: 82.2880% ( 125) 00:07:39.043 15728.640 - 15829.465: 83.6648% ( 126) 00:07:39.043 15829.465 - 15930.289: 84.9650% ( 119) 00:07:39.043 15930.289 - 16031.114: 85.8610% ( 82) 00:07:39.043 16031.114 - 16131.938: 86.6914% ( 76) 
00:07:39.043 16131.938 - 16232.763: 87.5765% ( 81) 00:07:39.043 16232.763 - 16333.588: 88.4615% ( 81) 00:07:39.043 16333.588 - 16434.412: 89.4231% ( 88) 00:07:39.043 16434.412 - 16535.237: 90.4283% ( 92) 00:07:39.043 16535.237 - 16636.062: 91.1932% ( 70) 00:07:39.043 16636.062 - 16736.886: 92.0673% ( 80) 00:07:39.043 16736.886 - 16837.711: 92.8322% ( 70) 00:07:39.043 16837.711 - 16938.535: 93.3348% ( 46) 00:07:39.043 16938.535 - 17039.360: 93.7281% ( 36) 00:07:39.043 17039.360 - 17140.185: 94.0559% ( 30) 00:07:39.043 17140.185 - 17241.009: 94.4384% ( 35) 00:07:39.043 17241.009 - 17341.834: 94.7334% ( 27) 00:07:39.043 17341.834 - 17442.658: 95.0503% ( 29) 00:07:39.043 17442.658 - 17543.483: 95.3234% ( 25) 00:07:39.043 17543.483 - 17644.308: 95.5092% ( 17) 00:07:39.043 17644.308 - 17745.132: 95.6622% ( 14) 00:07:39.043 17745.132 - 17845.957: 95.7714% ( 10) 00:07:39.043 17845.957 - 17946.782: 95.9135% ( 13) 00:07:39.043 17946.782 - 18047.606: 96.0774% ( 15) 00:07:39.043 18047.606 - 18148.431: 96.2522% ( 16) 00:07:39.043 18148.431 - 18249.255: 96.4052% ( 14) 00:07:39.043 18249.255 - 18350.080: 96.5363% ( 12) 00:07:39.043 18350.080 - 18450.905: 96.6783% ( 13) 00:07:39.043 18450.905 - 18551.729: 96.8094% ( 12) 00:07:39.043 18551.729 - 18652.554: 96.9296% ( 11) 00:07:39.043 18652.554 - 18753.378: 97.0498% ( 11) 00:07:39.043 18753.378 - 18854.203: 97.2574% ( 19) 00:07:39.043 18854.203 - 18955.028: 97.3885% ( 12) 00:07:39.043 18955.028 - 19055.852: 97.5087% ( 11) 00:07:39.043 19055.852 - 19156.677: 97.6945% ( 17) 00:07:39.043 19156.677 - 19257.502: 97.7491% ( 5) 00:07:39.043 19257.502 - 19358.326: 97.8147% ( 6) 00:07:39.043 19358.326 - 19459.151: 97.8912% ( 7) 00:07:39.043 19459.151 - 19559.975: 97.9021% ( 1) 00:07:39.043 20265.748 - 20366.572: 97.9130% ( 1) 00:07:39.043 20366.572 - 20467.397: 97.9895% ( 7) 00:07:39.043 20467.397 - 20568.222: 98.0332% ( 4) 00:07:39.043 20568.222 - 20669.046: 98.0769% ( 4) 00:07:39.043 20669.046 - 20769.871: 98.1206% ( 4) 00:07:39.043 20769.871 - 20870.695: 98.1643% ( 4) 00:07:39.043 20870.695 - 20971.520: 98.2080% ( 4) 00:07:39.043 20971.520 - 21072.345: 98.2517% ( 4) 00:07:39.043 21072.345 - 21173.169: 98.2955% ( 4) 00:07:39.043 21173.169 - 21273.994: 98.3392% ( 4) 00:07:39.043 21273.994 - 21374.818: 98.3938% ( 5) 00:07:39.043 21374.818 - 21475.643: 98.4266% ( 3) 00:07:39.043 21475.643 - 21576.468: 98.4703% ( 4) 00:07:39.043 21576.468 - 21677.292: 98.5249% ( 5) 00:07:39.043 21677.292 - 21778.117: 98.5795% ( 5) 00:07:39.043 21778.117 - 21878.942: 98.6014% ( 2) 00:07:39.043 24097.083 - 24197.908: 98.6123% ( 1) 00:07:39.043 24197.908 - 24298.732: 98.6560% ( 4) 00:07:39.043 24298.732 - 24399.557: 98.6997% ( 4) 00:07:39.043 24399.557 - 24500.382: 98.7434% ( 4) 00:07:39.043 24500.382 - 24601.206: 98.7872% ( 4) 00:07:39.043 24601.206 - 24702.031: 98.8309% ( 4) 00:07:39.043 24702.031 - 24802.855: 98.8855% ( 5) 00:07:39.043 24802.855 - 24903.680: 98.9292% ( 4) 00:07:39.043 24903.680 - 25004.505: 98.9729% ( 4) 00:07:39.043 25004.505 - 25105.329: 99.0166% ( 4) 00:07:39.043 25105.329 - 25206.154: 99.0603% ( 4) 00:07:39.043 25206.154 - 25306.978: 99.1040% ( 4) 00:07:39.043 25306.978 - 25407.803: 99.1587% ( 5) 00:07:39.043 25407.803 - 25508.628: 99.2024% ( 4) 00:07:39.043 25508.628 - 25609.452: 99.2351% ( 3) 00:07:39.043 25609.452 - 25710.277: 99.2788% ( 4) 00:07:39.043 25710.277 - 25811.102: 99.3007% ( 2) 00:07:39.043 31860.578 - 32062.228: 99.3663% ( 6) 00:07:39.043 32062.228 - 32263.877: 99.4537% ( 8) 00:07:39.043 32263.877 - 32465.526: 99.5411% ( 8) 00:07:39.043 32465.526 - 
32667.175: 99.6285% ( 8) 00:07:39.043 32667.175 - 32868.825: 99.7159% ( 8) 00:07:39.043 32868.825 - 33070.474: 99.8033% ( 8) 00:07:39.043 33070.474 - 33272.123: 99.8907% ( 8) 00:07:39.043 33272.123 - 33473.772: 99.9781% ( 8) 00:07:39.043 33473.772 - 33675.422: 100.0000% ( 2) 00:07:39.043 00:07:39.043 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.043 ============================================================================== 00:07:39.043 Range in us Cumulative IO count 00:07:39.043 9326.277 - 9376.689: 0.0651% ( 6) 00:07:39.043 9376.689 - 9427.102: 0.1302% ( 6) 00:07:39.043 9427.102 - 9477.514: 0.2279% ( 9) 00:07:39.043 9477.514 - 9527.926: 0.3038% ( 7) 00:07:39.043 9527.926 - 9578.338: 0.4123% ( 10) 00:07:39.043 9578.338 - 9628.751: 0.5642% ( 14) 00:07:39.043 9628.751 - 9679.163: 0.7053% ( 13) 00:07:39.043 9679.163 - 9729.575: 0.8789% ( 16) 00:07:39.043 9729.575 - 9779.988: 1.0417% ( 15) 00:07:39.043 9779.988 - 9830.400: 1.3672% ( 30) 00:07:39.043 9830.400 - 9880.812: 1.4757% ( 10) 00:07:39.044 9880.812 - 9931.225: 1.5951% ( 11) 00:07:39.044 9931.225 - 9981.637: 1.8012% ( 19) 00:07:39.044 9981.637 - 10032.049: 1.8663% ( 6) 00:07:39.044 10032.049 - 10082.462: 1.9314% ( 6) 00:07:39.044 10082.462 - 10132.874: 1.9965% ( 6) 00:07:39.044 10132.874 - 10183.286: 2.0291% ( 3) 00:07:39.044 10183.286 - 10233.698: 2.0725% ( 4) 00:07:39.044 10233.698 - 10284.111: 2.1050% ( 3) 00:07:39.044 10284.111 - 10334.523: 2.1376% ( 3) 00:07:39.044 10334.523 - 10384.935: 2.2244% ( 8) 00:07:39.044 10384.935 - 10435.348: 2.3329% ( 10) 00:07:39.044 10435.348 - 10485.760: 2.5391% ( 19) 00:07:39.044 10485.760 - 10536.172: 2.8103% ( 25) 00:07:39.044 10536.172 - 10586.585: 3.1793% ( 34) 00:07:39.044 10586.585 - 10636.997: 3.7109% ( 49) 00:07:39.044 10636.997 - 10687.409: 4.0690% ( 33) 00:07:39.044 10687.409 - 10737.822: 4.4596% ( 36) 00:07:39.044 10737.822 - 10788.234: 4.8937% ( 40) 00:07:39.044 10788.234 - 10838.646: 5.2626% ( 34) 00:07:39.044 10838.646 - 10889.058: 5.5990% ( 31) 00:07:39.044 10889.058 - 10939.471: 5.9462% ( 32) 00:07:39.044 10939.471 - 10989.883: 6.3151% ( 34) 00:07:39.044 10989.883 - 11040.295: 6.7057% ( 36) 00:07:39.044 11040.295 - 11090.708: 7.4436% ( 68) 00:07:39.044 11090.708 - 11141.120: 8.0187% ( 53) 00:07:39.044 11141.120 - 11191.532: 8.5286% ( 47) 00:07:39.044 11191.532 - 11241.945: 9.0603% ( 49) 00:07:39.044 11241.945 - 11292.357: 9.6897% ( 58) 00:07:39.044 11292.357 - 11342.769: 10.3841% ( 64) 00:07:39.044 11342.769 - 11393.182: 11.2522% ( 80) 00:07:39.044 11393.182 - 11443.594: 11.8707% ( 57) 00:07:39.044 11443.594 - 11494.006: 12.6302% ( 70) 00:07:39.044 11494.006 - 11544.418: 13.4657% ( 77) 00:07:39.044 11544.418 - 11594.831: 14.2687% ( 74) 00:07:39.044 11594.831 - 11645.243: 15.0065% ( 68) 00:07:39.044 11645.243 - 11695.655: 15.7661% ( 70) 00:07:39.044 11695.655 - 11746.068: 16.4931% ( 67) 00:07:39.044 11746.068 - 11796.480: 17.3720% ( 81) 00:07:39.044 11796.480 - 11846.892: 18.2075% ( 77) 00:07:39.044 11846.892 - 11897.305: 19.1840% ( 90) 00:07:39.044 11897.305 - 11947.717: 19.9327% ( 69) 00:07:39.044 11947.717 - 11998.129: 20.5187% ( 54) 00:07:39.044 11998.129 - 12048.542: 21.2348% ( 66) 00:07:39.044 12048.542 - 12098.954: 21.9727% ( 68) 00:07:39.044 12098.954 - 12149.366: 22.7214% ( 69) 00:07:39.044 12149.366 - 12199.778: 23.3941% ( 62) 00:07:39.044 12199.778 - 12250.191: 24.0668% ( 62) 00:07:39.044 12250.191 - 12300.603: 24.9891% ( 85) 00:07:39.044 12300.603 - 12351.015: 25.9223% ( 86) 00:07:39.044 12351.015 - 12401.428: 26.6602% ( 68) 00:07:39.044 
12401.428 - 12451.840: 27.5933% ( 86) 00:07:39.044 12451.840 - 12502.252: 28.6133% ( 94) 00:07:39.044 12502.252 - 12552.665: 29.3945% ( 72) 00:07:39.044 12552.665 - 12603.077: 30.1975% ( 74) 00:07:39.044 12603.077 - 12653.489: 31.2174% ( 94) 00:07:39.044 12653.489 - 12703.902: 32.1723% ( 88) 00:07:39.044 12703.902 - 12754.314: 32.9861% ( 75) 00:07:39.044 12754.314 - 12804.726: 33.9735% ( 91) 00:07:39.044 12804.726 - 12855.138: 34.7982% ( 76) 00:07:39.044 12855.138 - 12905.551: 35.8941% ( 101) 00:07:39.044 12905.551 - 13006.375: 37.8581% ( 181) 00:07:39.044 13006.375 - 13107.200: 39.6593% ( 166) 00:07:39.044 13107.200 - 13208.025: 41.8077% ( 198) 00:07:39.044 13208.025 - 13308.849: 43.6957% ( 174) 00:07:39.044 13308.849 - 13409.674: 46.1806% ( 229) 00:07:39.044 13409.674 - 13510.498: 48.2747% ( 193) 00:07:39.044 13510.498 - 13611.323: 50.6185% ( 216) 00:07:39.044 13611.323 - 13712.148: 52.4957% ( 173) 00:07:39.044 13712.148 - 13812.972: 54.1992% ( 157) 00:07:39.044 13812.972 - 13913.797: 56.6298% ( 224) 00:07:39.044 13913.797 - 14014.622: 58.3333% ( 157) 00:07:39.044 14014.622 - 14115.446: 59.8850% ( 143) 00:07:39.044 14115.446 - 14216.271: 61.6536% ( 163) 00:07:39.044 14216.271 - 14317.095: 63.2161% ( 144) 00:07:39.044 14317.095 - 14417.920: 64.9414% ( 159) 00:07:39.044 14417.920 - 14518.745: 66.1675% ( 113) 00:07:39.044 14518.745 - 14619.569: 67.2418% ( 99) 00:07:39.044 14619.569 - 14720.394: 68.0990% ( 79) 00:07:39.044 14720.394 - 14821.218: 69.1623% ( 98) 00:07:39.044 14821.218 - 14922.043: 70.1172% ( 88) 00:07:39.044 14922.043 - 15022.868: 71.3759% ( 116) 00:07:39.044 15022.868 - 15123.692: 72.6345% ( 116) 00:07:39.044 15123.692 - 15224.517: 74.0885% ( 134) 00:07:39.044 15224.517 - 15325.342: 75.6293% ( 142) 00:07:39.044 15325.342 - 15426.166: 77.3655% ( 160) 00:07:39.044 15426.166 - 15526.991: 79.4054% ( 188) 00:07:39.044 15526.991 - 15627.815: 81.1415% ( 160) 00:07:39.044 15627.815 - 15728.640: 82.7040% ( 144) 00:07:39.044 15728.640 - 15829.465: 84.5161% ( 167) 00:07:39.044 15829.465 - 15930.289: 85.7530% ( 114) 00:07:39.044 15930.289 - 16031.114: 86.7405% ( 91) 00:07:39.044 16031.114 - 16131.938: 87.5977% ( 79) 00:07:39.044 16131.938 - 16232.763: 88.3464% ( 69) 00:07:39.044 16232.763 - 16333.588: 89.0299% ( 63) 00:07:39.044 16333.588 - 16434.412: 89.6376% ( 56) 00:07:39.044 16434.412 - 16535.237: 90.1150% ( 44) 00:07:39.044 16535.237 - 16636.062: 90.6467% ( 49) 00:07:39.044 16636.062 - 16736.886: 91.3411% ( 64) 00:07:39.044 16736.886 - 16837.711: 92.1224% ( 72) 00:07:39.044 16837.711 - 16938.535: 92.8819% ( 70) 00:07:39.044 16938.535 - 17039.360: 93.6198% ( 68) 00:07:39.044 17039.360 - 17140.185: 94.1298% ( 47) 00:07:39.044 17140.185 - 17241.009: 94.5421% ( 38) 00:07:39.044 17241.009 - 17341.834: 95.0955% ( 51) 00:07:39.044 17341.834 - 17442.658: 95.6380% ( 50) 00:07:39.044 17442.658 - 17543.483: 96.1806% ( 50) 00:07:39.044 17543.483 - 17644.308: 96.4844% ( 28) 00:07:39.044 17644.308 - 17745.132: 96.6905% ( 19) 00:07:39.044 17745.132 - 17845.957: 96.8641% ( 16) 00:07:39.044 17845.957 - 17946.782: 97.0269% ( 15) 00:07:39.044 17946.782 - 18047.606: 97.2005% ( 16) 00:07:39.044 18047.606 - 18148.431: 97.3741% ( 16) 00:07:39.044 18148.431 - 18249.255: 97.5152% ( 13) 00:07:39.044 18249.255 - 18350.080: 97.5911% ( 7) 00:07:39.044 18350.080 - 18450.905: 97.6780% ( 8) 00:07:39.044 18450.905 - 18551.729: 97.7322% ( 5) 00:07:39.044 18551.729 - 18652.554: 97.8841% ( 14) 00:07:39.044 18652.554 - 18753.378: 98.0143% ( 12) 00:07:39.044 18753.378 - 18854.203: 98.3398% ( 30) 00:07:39.044 18854.203 - 
18955.028: 98.4592% ( 11) 00:07:39.044 18955.028 - 19055.852: 98.5352% ( 7) 00:07:39.044 19055.852 - 19156.677: 98.5894% ( 5) 00:07:39.044 19156.677 - 19257.502: 98.6111% ( 2) 00:07:39.044 19559.975 - 19660.800: 98.6871% ( 7) 00:07:39.044 19660.800 - 19761.625: 98.7522% ( 6) 00:07:39.044 19761.625 - 19862.449: 98.8064% ( 5) 00:07:39.044 19862.449 - 19963.274: 98.8281% ( 2) 00:07:39.044 19963.274 - 20064.098: 98.8715% ( 4) 00:07:39.044 20064.098 - 20164.923: 98.9149% ( 4) 00:07:39.044 20164.923 - 20265.748: 98.9692% ( 5) 00:07:39.044 20265.748 - 20366.572: 99.0126% ( 4) 00:07:39.044 20366.572 - 20467.397: 99.0560% ( 4) 00:07:39.044 20467.397 - 20568.222: 99.1102% ( 5) 00:07:39.044 20568.222 - 20669.046: 99.1536% ( 4) 00:07:39.044 20669.046 - 20769.871: 99.2079% ( 5) 00:07:39.044 20769.871 - 20870.695: 99.2513% ( 4) 00:07:39.044 20870.695 - 20971.520: 99.2947% ( 4) 00:07:39.044 20971.520 - 21072.345: 99.3056% ( 1) 00:07:39.044 23794.609 - 23895.434: 99.3381% ( 3) 00:07:39.044 23895.434 - 23996.258: 99.3815% ( 4) 00:07:39.044 23996.258 - 24097.083: 99.4249% ( 4) 00:07:39.044 24097.083 - 24197.908: 99.4683% ( 4) 00:07:39.044 24197.908 - 24298.732: 99.5117% ( 4) 00:07:39.044 24298.732 - 24399.557: 99.5551% ( 4) 00:07:39.044 24399.557 - 24500.382: 99.6094% ( 5) 00:07:39.044 24500.382 - 24601.206: 99.6528% ( 4) 00:07:39.044 24601.206 - 24702.031: 99.6962% ( 4) 00:07:39.044 24702.031 - 24802.855: 99.7396% ( 4) 00:07:39.044 24802.855 - 24903.680: 99.7830% ( 4) 00:07:39.044 24903.680 - 25004.505: 99.8264% ( 4) 00:07:39.044 25004.505 - 25105.329: 99.8698% ( 4) 00:07:39.044 25105.329 - 25206.154: 99.9240% ( 5) 00:07:39.044 25206.154 - 25306.978: 99.9674% ( 4) 00:07:39.044 25306.978 - 25407.803: 100.0000% ( 3) 00:07:39.044 00:07:39.044 19:57:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:39.044 00:07:39.044 real 0m2.540s 00:07:39.044 user 0m2.205s 00:07:39.044 sys 0m0.210s 00:07:39.044 19:57:12 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.044 19:57:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.044 ************************************ 00:07:39.044 END TEST nvme_perf 00:07:39.044 ************************************ 00:07:39.307 19:57:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:39.307 19:57:12 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:39.307 19:57:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.307 19:57:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.307 ************************************ 00:07:39.307 START TEST nvme_hello_world 00:07:39.307 ************************************ 00:07:39.307 19:57:12 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:39.307 Initializing NVMe Controllers 00:07:39.307 Attached to 0000:00:13.0 00:07:39.307 Namespace ID: 1 size: 1GB 00:07:39.307 Attached to 0000:00:10.0 00:07:39.307 Namespace ID: 1 size: 6GB 00:07:39.307 Attached to 0000:00:11.0 00:07:39.307 Namespace ID: 1 size: 5GB 00:07:39.307 Attached to 0000:00:12.0 00:07:39.307 Namespace ID: 1 size: 4GB 00:07:39.307 Namespace ID: 2 size: 4GB 00:07:39.307 Namespace ID: 3 size: 4GB 00:07:39.307 Initialization complete. 00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 
00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 00:07:39.307 INFO: using host memory buffer for IO 00:07:39.307 Hello world! 00:07:39.569 00:07:39.569 real 0m0.243s 00:07:39.569 user 0m0.094s 00:07:39.569 sys 0m0.103s 00:07:39.569 ************************************ 00:07:39.569 19:57:13 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.569 19:57:13 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:39.569 END TEST nvme_hello_world 00:07:39.569 ************************************ 00:07:39.569 19:57:13 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:39.569 19:57:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.569 19:57:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.569 19:57:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.569 ************************************ 00:07:39.569 START TEST nvme_sgl 00:07:39.569 ************************************ 00:07:39.569 19:57:13 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:39.832 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:39.832 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:39.832 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:39.832 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_5 Invalid IO length parameter 
00:07:39.832 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:39.832 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:39.832 NVMe Readv/Writev Request test 00:07:39.832 Attached to 0000:00:13.0 00:07:39.832 Attached to 0000:00:10.0 00:07:39.832 Attached to 0000:00:11.0 00:07:39.832 Attached to 0000:00:12.0 00:07:39.832 0000:00:10.0: build_io_request_2 test passed 00:07:39.832 0000:00:10.0: build_io_request_4 test passed 00:07:39.832 0000:00:10.0: build_io_request_5 test passed 00:07:39.832 0000:00:10.0: build_io_request_6 test passed 00:07:39.832 0000:00:10.0: build_io_request_7 test passed 00:07:39.832 0000:00:10.0: build_io_request_10 test passed 00:07:39.832 0000:00:11.0: build_io_request_2 test passed 00:07:39.832 0000:00:11.0: build_io_request_4 test passed 00:07:39.832 0000:00:11.0: build_io_request_5 test passed 00:07:39.832 0000:00:11.0: build_io_request_6 test passed 00:07:39.832 0000:00:11.0: build_io_request_7 test passed 00:07:39.832 0000:00:11.0: build_io_request_10 test passed 00:07:39.832 Cleaning up... 00:07:39.832 00:07:39.832 real 0m0.321s 00:07:39.832 user 0m0.163s 00:07:39.832 sys 0m0.107s 00:07:39.832 ************************************ 00:07:39.832 END TEST nvme_sgl 00:07:39.832 ************************************ 00:07:39.832 19:57:13 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.832 19:57:13 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:39.832 19:57:13 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:39.832 19:57:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.832 19:57:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.832 19:57:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.832 ************************************ 00:07:39.832 START TEST nvme_e2edp 00:07:39.832 ************************************ 00:07:39.832 19:57:13 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:40.093 NVMe Write/Read with End-to-End data protection test 00:07:40.093 Attached to 0000:00:13.0 00:07:40.093 Attached to 0000:00:10.0 00:07:40.093 Attached to 0000:00:11.0 00:07:40.093 Attached to 0000:00:12.0 00:07:40.093 Cleaning up... 
00:07:40.093 00:07:40.093 real 0m0.240s 00:07:40.093 user 0m0.071s 00:07:40.093 sys 0m0.112s 00:07:40.093 19:57:13 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.093 ************************************ 00:07:40.093 END TEST nvme_e2edp 00:07:40.093 ************************************ 00:07:40.093 19:57:13 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:40.093 19:57:13 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:40.093 19:57:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.093 19:57:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.093 19:57:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.355 ************************************ 00:07:40.355 START TEST nvme_reserve 00:07:40.355 ************************************ 00:07:40.355 19:57:13 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:40.355 ===================================================== 00:07:40.355 NVMe Controller at PCI bus 0, device 19, function 0 00:07:40.355 ===================================================== 00:07:40.355 Reservations: Not Supported 00:07:40.355 ===================================================== 00:07:40.355 NVMe Controller at PCI bus 0, device 16, function 0 00:07:40.355 ===================================================== 00:07:40.355 Reservations: Not Supported 00:07:40.355 ===================================================== 00:07:40.355 NVMe Controller at PCI bus 0, device 17, function 0 00:07:40.355 ===================================================== 00:07:40.355 Reservations: Not Supported 00:07:40.355 ===================================================== 00:07:40.355 NVMe Controller at PCI bus 0, device 18, function 0 00:07:40.355 ===================================================== 00:07:40.355 Reservations: Not Supported 00:07:40.355 Reservation test passed 00:07:40.355 00:07:40.355 real 0m0.230s 00:07:40.355 user 0m0.069s 00:07:40.355 sys 0m0.117s 00:07:40.355 19:57:14 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.355 19:57:14 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:40.355 ************************************ 00:07:40.355 END TEST nvme_reserve 00:07:40.355 ************************************ 00:07:40.617 19:57:14 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:40.617 19:57:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.617 19:57:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.617 19:57:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.617 ************************************ 00:07:40.617 START TEST nvme_err_injection 00:07:40.617 ************************************ 00:07:40.617 19:57:14 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:40.617 NVMe Error Injection test 00:07:40.617 Attached to 0000:00:13.0 00:07:40.617 Attached to 0000:00:10.0 00:07:40.617 Attached to 0000:00:11.0 00:07:40.617 Attached to 0000:00:12.0 00:07:40.617 0000:00:10.0: get features failed as expected 00:07:40.617 0000:00:11.0: get features failed as expected 00:07:40.617 0000:00:12.0: get features failed as expected 00:07:40.617 0000:00:13.0: get features failed as expected 00:07:40.617 
0000:00:12.0: get features successfully as expected 00:07:40.617 0000:00:13.0: get features successfully as expected 00:07:40.617 0000:00:10.0: get features successfully as expected 00:07:40.617 0000:00:11.0: get features successfully as expected 00:07:40.617 0000:00:12.0: read failed as expected 00:07:40.617 0000:00:13.0: read failed as expected 00:07:40.617 0000:00:10.0: read failed as expected 00:07:40.617 0000:00:11.0: read failed as expected 00:07:40.617 0000:00:12.0: read successfully as expected 00:07:40.617 0000:00:13.0: read successfully as expected 00:07:40.617 0000:00:10.0: read successfully as expected 00:07:40.617 0000:00:11.0: read successfully as expected 00:07:40.617 Cleaning up... 00:07:40.618 00:07:40.618 real 0m0.209s 00:07:40.618 user 0m0.084s 00:07:40.618 sys 0m0.090s 00:07:40.618 19:57:14 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.618 ************************************ 00:07:40.618 END TEST nvme_err_injection 00:07:40.618 ************************************ 00:07:40.618 19:57:14 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:40.879 19:57:14 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:40.879 19:57:14 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:40.879 19:57:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.879 19:57:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.879 ************************************ 00:07:40.879 START TEST nvme_overhead 00:07:40.879 ************************************ 00:07:40.879 19:57:14 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:42.264 Initializing NVMe Controllers 00:07:42.264 Attached to 0000:00:13.0 00:07:42.264 Attached to 0000:00:10.0 00:07:42.264 Attached to 0000:00:11.0 00:07:42.264 Attached to 0000:00:12.0 00:07:42.264 Initialization complete. Launching workers. 
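The submit/complete histograms printed below give, for each latency bucket, the cumulative percentage of IOs at or below that bucket together with the per-bucket count. A comparable cumulative table can be rebuilt offline from raw per-IO latencies; the sketch below is illustrative only (the latencies_ns.txt input file and the fixed 100 us bucket width are assumptions, not the overhead tool's actual internals or output format).

  # Build a cumulative latency table from a file with one latency (in ns) per line.
  # latencies_ns.txt and the 100 us bucket width are hypothetical.
  awk '
    { b = int($1 / 100000); count[b]++; total++; if (b > maxb) maxb = b }
    END {
      for (i = 0; i <= maxb; i++)
        if (i in count) {
          cum += count[i]
          printf "%12.3f us: %8.4f%% (%6d)\n", (i + 1) * 100, 100 * cum / total, count[i]
        }
    }' latencies_ns.txt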
00:07:42.264 submit (in ns) avg, min, max = 12283.3, 9966.9, 279441.5 00:07:42.264 complete (in ns) avg, min, max = 7809.1, 7302.3, 112899.2 00:07:42.264 00:07:42.264 Submit histogram 00:07:42.264 ================ 00:07:42.264 Range in us Cumulative Count 00:07:42.264 9.945 - 9.994: 0.0311% ( 1) 00:07:42.264 10.338 - 10.388: 0.0622% ( 1) 00:07:42.264 10.437 - 10.486: 0.0932% ( 1) 00:07:42.264 10.486 - 10.535: 0.1865% ( 3) 00:07:42.264 10.535 - 10.585: 0.2486% ( 2) 00:07:42.264 10.634 - 10.683: 0.2797% ( 1) 00:07:42.264 10.683 - 10.732: 0.3418% ( 2) 00:07:42.264 10.732 - 10.782: 0.4040% ( 2) 00:07:42.264 10.782 - 10.831: 0.4351% ( 1) 00:07:42.264 10.831 - 10.880: 0.5283% ( 3) 00:07:42.264 10.880 - 10.929: 0.5904% ( 2) 00:07:42.264 10.929 - 10.978: 0.8701% ( 9) 00:07:42.264 10.978 - 11.028: 1.3362% ( 15) 00:07:42.264 11.028 - 11.077: 2.2996% ( 31) 00:07:42.264 11.077 - 11.126: 4.1641% ( 60) 00:07:42.264 11.126 - 11.175: 8.1417% ( 128) 00:07:42.264 11.175 - 11.225: 16.4077% ( 266) 00:07:42.264 11.225 - 11.274: 26.0099% ( 309) 00:07:42.264 11.274 - 11.323: 37.8807% ( 382) 00:07:42.264 11.323 - 11.372: 48.4773% ( 341) 00:07:42.264 11.372 - 11.422: 58.3282% ( 317) 00:07:42.264 11.422 - 11.471: 65.8794% ( 243) 00:07:42.264 11.471 - 11.520: 71.1001% ( 168) 00:07:42.264 11.520 - 11.569: 74.8912% ( 122) 00:07:42.264 11.569 - 11.618: 77.9055% ( 97) 00:07:42.264 11.618 - 11.668: 80.2362% ( 75) 00:07:42.264 11.668 - 11.717: 81.7589% ( 49) 00:07:42.264 11.717 - 11.766: 82.6911% ( 30) 00:07:42.264 11.766 - 11.815: 83.4058% ( 23) 00:07:42.264 11.815 - 11.865: 84.2449% ( 27) 00:07:42.264 11.865 - 11.914: 84.7421% ( 16) 00:07:42.264 11.914 - 11.963: 85.0218% ( 9) 00:07:42.264 11.963 - 12.012: 85.1771% ( 5) 00:07:42.264 12.012 - 12.062: 85.5190% ( 11) 00:07:42.264 12.062 - 12.111: 85.8608% ( 11) 00:07:42.264 12.111 - 12.160: 86.0783% ( 7) 00:07:42.264 12.160 - 12.209: 86.4201% ( 11) 00:07:42.264 12.209 - 12.258: 86.5444% ( 4) 00:07:42.264 12.258 - 12.308: 86.7309% ( 6) 00:07:42.264 12.308 - 12.357: 86.8241% ( 3) 00:07:42.264 12.357 - 12.406: 86.9173% ( 3) 00:07:42.264 12.406 - 12.455: 87.0416% ( 4) 00:07:42.264 12.455 - 12.505: 87.0727% ( 1) 00:07:42.264 12.505 - 12.554: 87.2592% ( 6) 00:07:42.264 12.554 - 12.603: 87.3835% ( 4) 00:07:42.264 12.603 - 12.702: 87.5699% ( 6) 00:07:42.264 12.702 - 12.800: 87.6942% ( 4) 00:07:42.264 12.800 - 12.898: 87.9428% ( 8) 00:07:42.264 12.898 - 12.997: 88.2225% ( 9) 00:07:42.264 12.997 - 13.095: 88.4400% ( 7) 00:07:42.264 13.095 - 13.194: 88.5022% ( 2) 00:07:42.264 13.194 - 13.292: 88.6265% ( 4) 00:07:42.264 13.292 - 13.391: 88.7508% ( 4) 00:07:42.264 13.391 - 13.489: 88.8440% ( 3) 00:07:42.264 13.489 - 13.588: 88.9994% ( 5) 00:07:42.264 13.588 - 13.686: 89.0615% ( 2) 00:07:42.264 13.686 - 13.785: 89.1858% ( 4) 00:07:42.264 13.785 - 13.883: 89.2480% ( 2) 00:07:42.264 13.883 - 13.982: 89.4034% ( 5) 00:07:42.264 13.982 - 14.080: 89.5898% ( 6) 00:07:42.264 14.080 - 14.178: 89.6830% ( 3) 00:07:42.264 14.178 - 14.277: 89.9938% ( 10) 00:07:42.264 14.277 - 14.375: 90.2113% ( 7) 00:07:42.264 14.375 - 14.474: 90.3667% ( 5) 00:07:42.264 14.474 - 14.572: 90.4910% ( 4) 00:07:42.264 14.572 - 14.671: 90.8639% ( 12) 00:07:42.264 14.671 - 14.769: 91.0503% ( 6) 00:07:42.264 14.769 - 14.868: 91.1436% ( 3) 00:07:42.264 14.868 - 14.966: 91.3922% ( 8) 00:07:42.264 14.966 - 15.065: 91.4543% ( 2) 00:07:42.264 15.065 - 15.163: 91.5786% ( 4) 00:07:42.264 15.163 - 15.262: 91.6097% ( 1) 00:07:42.264 15.262 - 15.360: 91.7340% ( 4) 00:07:42.264 15.360 - 15.458: 91.8894% ( 5) 00:07:42.264 15.458 - 
15.557: 91.9515% ( 2) 00:07:42.264 15.557 - 15.655: 92.1690% ( 7) 00:07:42.264 15.655 - 15.754: 92.2623% ( 3) 00:07:42.264 15.754 - 15.852: 92.4487% ( 6) 00:07:42.264 15.852 - 15.951: 92.6352% ( 6) 00:07:42.264 15.951 - 16.049: 92.8838% ( 8) 00:07:42.264 16.049 - 16.148: 93.0702% ( 6) 00:07:42.264 16.148 - 16.246: 93.3188% ( 8) 00:07:42.264 16.246 - 16.345: 93.5985% ( 9) 00:07:42.264 16.345 - 16.443: 94.0025% ( 13) 00:07:42.264 16.443 - 16.542: 94.1889% ( 6) 00:07:42.264 16.542 - 16.640: 94.5618% ( 12) 00:07:42.264 16.640 - 16.738: 94.7794% ( 7) 00:07:42.264 16.738 - 16.837: 95.1212% ( 11) 00:07:42.264 16.837 - 16.935: 95.2455% ( 4) 00:07:42.264 16.935 - 17.034: 95.5562% ( 10) 00:07:42.264 17.034 - 17.132: 95.7427% ( 6) 00:07:42.264 17.132 - 17.231: 96.1156% ( 12) 00:07:42.264 17.231 - 17.329: 96.3021% ( 6) 00:07:42.264 17.329 - 17.428: 96.3953% ( 3) 00:07:42.264 17.428 - 17.526: 96.4885% ( 3) 00:07:42.264 17.526 - 17.625: 96.6128% ( 4) 00:07:42.264 17.625 - 17.723: 96.6750% ( 2) 00:07:42.264 17.723 - 17.822: 96.9546% ( 9) 00:07:42.264 17.822 - 17.920: 97.0789% ( 4) 00:07:42.264 17.920 - 18.018: 97.1722% ( 3) 00:07:42.264 18.018 - 18.117: 97.2343% ( 2) 00:07:42.264 18.117 - 18.215: 97.4208% ( 6) 00:07:42.264 18.215 - 18.314: 97.4518% ( 1) 00:07:42.264 18.314 - 18.412: 97.4829% ( 1) 00:07:42.264 18.412 - 18.511: 97.5761% ( 3) 00:07:42.264 18.511 - 18.609: 97.8247% ( 8) 00:07:42.264 18.609 - 18.708: 97.9801% ( 5) 00:07:42.264 18.708 - 18.806: 98.0423% ( 2) 00:07:42.264 18.806 - 18.905: 98.1976% ( 5) 00:07:42.264 18.905 - 19.003: 98.2909% ( 3) 00:07:42.264 19.003 - 19.102: 98.3530% ( 2) 00:07:42.264 19.102 - 19.200: 98.3841% ( 1) 00:07:42.264 19.200 - 19.298: 98.4462% ( 2) 00:07:42.264 19.298 - 19.397: 98.4773% ( 1) 00:07:42.265 19.692 - 19.791: 98.5395% ( 2) 00:07:42.265 19.791 - 19.889: 98.5705% ( 1) 00:07:42.265 19.889 - 19.988: 98.6638% ( 3) 00:07:42.265 19.988 - 20.086: 98.7570% ( 3) 00:07:42.265 20.185 - 20.283: 98.8191% ( 2) 00:07:42.265 20.283 - 20.382: 99.0056% ( 6) 00:07:42.265 20.382 - 20.480: 99.0367% ( 1) 00:07:42.265 20.480 - 20.578: 99.0988% ( 2) 00:07:42.265 20.677 - 20.775: 99.1610% ( 2) 00:07:42.265 20.874 - 20.972: 99.1920% ( 1) 00:07:42.265 21.169 - 21.268: 99.2231% ( 1) 00:07:42.265 21.268 - 21.366: 99.3163% ( 3) 00:07:42.265 21.563 - 21.662: 99.4717% ( 5) 00:07:42.265 21.957 - 22.055: 99.5028% ( 1) 00:07:42.265 23.335 - 23.434: 99.5339% ( 1) 00:07:42.265 23.926 - 24.025: 99.5649% ( 1) 00:07:42.265 26.388 - 26.585: 99.5960% ( 1) 00:07:42.265 29.538 - 29.735: 99.6271% ( 1) 00:07:42.265 38.006 - 38.203: 99.6582% ( 1) 00:07:42.265 38.203 - 38.400: 99.6892% ( 1) 00:07:42.265 46.868 - 47.065: 99.7203% ( 1) 00:07:42.265 53.169 - 53.563: 99.7514% ( 1) 00:07:42.265 54.351 - 54.745: 99.7825% ( 1) 00:07:42.265 54.745 - 55.138: 99.8135% ( 1) 00:07:42.265 55.138 - 55.532: 99.8446% ( 1) 00:07:42.265 60.652 - 61.046: 99.8757% ( 1) 00:07:42.265 74.831 - 75.225: 99.9068% ( 1) 00:07:42.265 79.557 - 79.951: 99.9378% ( 1) 00:07:42.265 242.609 - 244.185: 99.9689% ( 1) 00:07:42.265 278.843 - 280.418: 100.0000% ( 1) 00:07:42.265 00:07:42.265 Complete histogram 00:07:42.265 ================== 00:07:42.265 Range in us Cumulative Count 00:07:42.265 7.286 - 7.335: 0.0622% ( 2) 00:07:42.265 7.335 - 7.385: 1.8334% ( 57) 00:07:42.265 7.385 - 7.434: 11.3735% ( 307) 00:07:42.265 7.434 - 7.483: 31.0131% ( 632) 00:07:42.265 7.483 - 7.532: 54.3195% ( 750) 00:07:42.265 7.532 - 7.582: 71.0690% ( 539) 00:07:42.265 7.582 - 7.631: 80.9820% ( 319) 00:07:42.265 7.631 - 7.680: 86.5444% ( 179) 00:07:42.265 7.680 - 
7.729: 89.0926% ( 82) 00:07:42.265 7.729 - 7.778: 91.2368% ( 69) 00:07:42.265 7.778 - 7.828: 92.8216% ( 51) 00:07:42.265 7.828 - 7.877: 93.6917% ( 28) 00:07:42.265 7.877 - 7.926: 94.3443% ( 21) 00:07:42.265 7.926 - 7.975: 94.7172% ( 12) 00:07:42.265 7.975 - 8.025: 94.9037% ( 6) 00:07:42.265 8.025 - 8.074: 95.0280% ( 4) 00:07:42.265 8.074 - 8.123: 95.2144% ( 6) 00:07:42.265 8.123 - 8.172: 95.4319% ( 7) 00:07:42.265 8.172 - 8.222: 95.6805% ( 8) 00:07:42.265 8.222 - 8.271: 96.0224% ( 11) 00:07:42.265 8.271 - 8.320: 96.2399% ( 7) 00:07:42.265 8.320 - 8.369: 96.4885% ( 8) 00:07:42.265 8.369 - 8.418: 96.7993% ( 10) 00:07:42.265 8.418 - 8.468: 97.0479% ( 8) 00:07:42.265 8.468 - 8.517: 97.2032% ( 5) 00:07:42.265 8.517 - 8.566: 97.3275% ( 4) 00:07:42.265 8.566 - 8.615: 97.4829% ( 5) 00:07:42.265 8.615 - 8.665: 97.5761% ( 3) 00:07:42.265 8.665 - 8.714: 97.6694% ( 3) 00:07:42.265 8.714 - 8.763: 97.7004% ( 1) 00:07:42.265 8.763 - 8.812: 97.7626% ( 2) 00:07:42.265 9.354 - 9.403: 97.7937% ( 1) 00:07:42.265 9.748 - 9.797: 97.8247% ( 1) 00:07:42.265 9.895 - 9.945: 97.8558% ( 1) 00:07:42.265 10.043 - 10.092: 97.8869% ( 1) 00:07:42.265 10.142 - 10.191: 97.9180% ( 1) 00:07:42.265 10.486 - 10.535: 97.9490% ( 1) 00:07:42.265 11.126 - 11.175: 97.9801% ( 1) 00:07:42.265 11.766 - 11.815: 98.0112% ( 1) 00:07:42.265 12.012 - 12.062: 98.0423% ( 1) 00:07:42.265 12.898 - 12.997: 98.1044% ( 2) 00:07:42.265 12.997 - 13.095: 98.1976% ( 3) 00:07:42.265 13.095 - 13.194: 98.3219% ( 4) 00:07:42.265 13.194 - 13.292: 98.5084% ( 6) 00:07:42.265 13.292 - 13.391: 98.6016% ( 3) 00:07:42.265 13.391 - 13.489: 98.7881% ( 6) 00:07:42.265 13.489 - 13.588: 98.8502% ( 2) 00:07:42.265 13.588 - 13.686: 98.8813% ( 1) 00:07:42.265 13.686 - 13.785: 98.9745% ( 3) 00:07:42.265 13.785 - 13.883: 99.0056% ( 1) 00:07:42.265 13.883 - 13.982: 99.0367% ( 1) 00:07:42.265 13.982 - 14.080: 99.0677% ( 1) 00:07:42.265 14.080 - 14.178: 99.0988% ( 1) 00:07:42.265 14.277 - 14.375: 99.1299% ( 1) 00:07:42.265 14.868 - 14.966: 99.1610% ( 1) 00:07:42.265 15.360 - 15.458: 99.1920% ( 1) 00:07:42.265 15.754 - 15.852: 99.2231% ( 1) 00:07:42.265 16.738 - 16.837: 99.2542% ( 1) 00:07:42.265 16.837 - 16.935: 99.2853% ( 1) 00:07:42.265 17.526 - 17.625: 99.3474% ( 2) 00:07:42.265 18.018 - 18.117: 99.4096% ( 2) 00:07:42.265 18.117 - 18.215: 99.4406% ( 1) 00:07:42.265 18.314 - 18.412: 99.4717% ( 1) 00:07:42.265 18.609 - 18.708: 99.5028% ( 1) 00:07:42.265 18.708 - 18.806: 99.5339% ( 1) 00:07:42.265 19.003 - 19.102: 99.6271% ( 3) 00:07:42.265 19.889 - 19.988: 99.6892% ( 2) 00:07:42.265 23.040 - 23.138: 99.7203% ( 1) 00:07:42.265 27.175 - 27.372: 99.7514% ( 1) 00:07:42.265 31.114 - 31.311: 99.7825% ( 1) 00:07:42.265 31.508 - 31.705: 99.8135% ( 1) 00:07:42.265 38.794 - 38.991: 99.8446% ( 1) 00:07:42.265 39.975 - 40.172: 99.8757% ( 1) 00:07:42.265 45.883 - 46.080: 99.9068% ( 1) 00:07:42.265 51.988 - 52.382: 99.9378% ( 1) 00:07:42.265 53.957 - 54.351: 99.9689% ( 1) 00:07:42.265 112.640 - 113.428: 100.0000% ( 1) 00:07:42.265 00:07:42.265 00:07:42.265 real 0m1.234s 00:07:42.265 user 0m1.079s 00:07:42.265 sys 0m0.102s 00:07:42.265 19:57:15 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.265 ************************************ 00:07:42.265 END TEST nvme_overhead 00:07:42.265 ************************************ 00:07:42.265 19:57:15 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:42.265 19:57:15 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:42.265 
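In the arbitration output below, each per-core line reports both a rate (IO/s) and the projected seconds per 100000 IOs; the second figure is simply 100000 divided by the first. A quick check against two values taken from that output:

  # 917.33 and 896.00 IO/s are figures from the arbitration results below.
  awk 'BEGIN {
    split("917.33 896.00", r, " ")
    for (i = 1; i <= 2; i++)
      printf "%.2f IO/s -> %.2f secs/100000 ios\n", r[i], 100000 / r[i]
  }'
  # prints 109.01 and 111.61, matching the per-core lines.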
19:57:15 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:42.265 19:57:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.265 19:57:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.265 ************************************ 00:07:42.265 START TEST nvme_arbitration 00:07:42.265 ************************************ 00:07:42.265 19:57:15 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:45.562 Initializing NVMe Controllers 00:07:45.562 Attached to 0000:00:13.0 00:07:45.562 Attached to 0000:00:10.0 00:07:45.562 Attached to 0000:00:11.0 00:07:45.562 Attached to 0000:00:12.0 00:07:45.562 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:45.562 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:45.562 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:45.562 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:45.562 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:45.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:45.563 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:45.563 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:45.563 Initialization complete. Launching workers. 00:07:45.563 Starting thread on core 1 with urgent priority queue 00:07:45.563 Starting thread on core 2 with urgent priority queue 00:07:45.563 Starting thread on core 3 with urgent priority queue 00:07:45.563 Starting thread on core 0 with urgent priority queue 00:07:45.563 QEMU NVMe Ctrl (12343 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:45.563 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:45.563 QEMU NVMe Ctrl (12340 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:45.563 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:45.563 QEMU NVMe Ctrl (12341 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:07:45.563 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:07:45.563 ======================================================== 00:07:45.563 00:07:45.563 00:07:45.563 real 0m3.340s 00:07:45.563 user 0m9.311s 00:07:45.563 sys 0m0.113s 00:07:45.563 19:57:19 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.563 ************************************ 00:07:45.563 END TEST nvme_arbitration 00:07:45.563 ************************************ 00:07:45.563 19:57:19 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:45.563 19:57:19 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:45.563 19:57:19 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.563 19:57:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.563 19:57:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.563 ************************************ 00:07:45.563 START TEST nvme_single_aen 00:07:45.563 ************************************ 00:07:45.563 19:57:19 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:45.563 Asynchronous Event Request test 00:07:45.563 Attached to 0000:00:13.0 00:07:45.563 Attached to 0000:00:10.0 00:07:45.563 Attached to 0000:00:11.0 00:07:45.563 Attached to 0000:00:12.0 00:07:45.563 Reset controller to setup AER completions for this process 00:07:45.563 
Registering asynchronous event callbacks... 00:07:45.563 Getting orig temperature thresholds of all controllers 00:07:45.563 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:45.563 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:45.563 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:45.563 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:45.563 Setting all controllers temperature threshold low to trigger AER 00:07:45.563 Waiting for all controllers temperature threshold to be set lower 00:07:45.563 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:45.563 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:45.563 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:45.563 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:45.563 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:45.563 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:45.563 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:45.563 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:45.563 Waiting for all controllers to trigger AER and reset threshold 00:07:45.563 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.563 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.563 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.563 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.563 Cleaning up... 00:07:45.563 00:07:45.563 real 0m0.212s 00:07:45.563 user 0m0.076s 00:07:45.563 sys 0m0.090s 00:07:45.563 19:57:19 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.563 ************************************ 00:07:45.563 END TEST nvme_single_aen 00:07:45.563 ************************************ 00:07:45.563 19:57:19 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:45.825 19:57:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:45.825 19:57:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.825 19:57:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.825 19:57:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.825 ************************************ 00:07:45.825 START TEST nvme_doorbell_aers 00:07:45.825 ************************************ 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- 
# jq -r '.config[].params.traddr' 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:45.825 19:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:46.086 [2024-11-19 19:57:19.686811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:07:56.082 Executing: test_write_invalid_db 00:07:56.082 Waiting for AER completion... 00:07:56.082 Failure: test_write_invalid_db 00:07:56.082 00:07:56.082 Executing: test_invalid_db_write_overflow_sq 00:07:56.082 Waiting for AER completion... 00:07:56.082 Failure: test_invalid_db_write_overflow_sq 00:07:56.082 00:07:56.082 Executing: test_invalid_db_write_overflow_cq 00:07:56.082 Waiting for AER completion... 00:07:56.082 Failure: test_invalid_db_write_overflow_cq 00:07:56.082 00:07:56.082 19:57:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:56.082 19:57:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:56.082 [2024-11-19 19:57:29.742346] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:06.053 Executing: test_write_invalid_db 00:08:06.053 Waiting for AER completion... 00:08:06.053 Failure: test_write_invalid_db 00:08:06.053 00:08:06.053 Executing: test_invalid_db_write_overflow_sq 00:08:06.053 Waiting for AER completion... 00:08:06.053 Failure: test_invalid_db_write_overflow_sq 00:08:06.053 00:08:06.053 Executing: test_invalid_db_write_overflow_cq 00:08:06.053 Waiting for AER completion... 00:08:06.053 Failure: test_invalid_db_write_overflow_cq 00:08:06.053 00:08:06.053 19:57:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.053 19:57:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:06.053 [2024-11-19 19:57:39.763910] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:16.024 Executing: test_write_invalid_db 00:08:16.024 Waiting for AER completion... 00:08:16.024 Failure: test_write_invalid_db 00:08:16.024 00:08:16.024 Executing: test_invalid_db_write_overflow_sq 00:08:16.024 Waiting for AER completion... 00:08:16.024 Failure: test_invalid_db_write_overflow_sq 00:08:16.024 00:08:16.024 Executing: test_invalid_db_write_overflow_cq 00:08:16.024 Waiting for AER completion... 
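The nvme_doorbell_aers run above builds its controller list by piping gen_nvme.sh through jq and then invokes the doorbell_aers binary once per PCI address under a 10-second timeout. A standalone sketch of that enumerate-and-loop pattern, reassembled from the commands echoed in this log (loop body illustrative, not a verbatim copy of the test function):

  # Enumerate NVMe PCI addresses from the generated config and run the tool per controller.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done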
00:08:16.024 Failure: test_invalid_db_write_overflow_cq 00:08:16.024 00:08:16.024 19:57:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.024 19:57:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:16.024 [2024-11-19 19:57:49.798122] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.028 Executing: test_write_invalid_db 00:08:26.028 Waiting for AER completion... 00:08:26.028 Failure: test_write_invalid_db 00:08:26.028 00:08:26.028 Executing: test_invalid_db_write_overflow_sq 00:08:26.028 Waiting for AER completion... 00:08:26.028 Failure: test_invalid_db_write_overflow_sq 00:08:26.028 00:08:26.028 Executing: test_invalid_db_write_overflow_cq 00:08:26.028 Waiting for AER completion... 00:08:26.028 Failure: test_invalid_db_write_overflow_cq 00:08:26.028 00:08:26.028 00:08:26.028 real 0m40.207s 00:08:26.028 user 0m34.170s 00:08:26.028 sys 0m5.648s 00:08:26.028 19:57:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.028 ************************************ 00:08:26.028 END TEST nvme_doorbell_aers 00:08:26.028 ************************************ 00:08:26.028 19:57:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:26.028 19:57:59 nvme -- nvme/nvme.sh@97 -- # uname 00:08:26.028 19:57:59 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:26.028 19:57:59 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.028 19:57:59 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:26.028 19:57:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.028 19:57:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.028 ************************************ 00:08:26.028 START TEST nvme_multi_aen 00:08:26.028 ************************************ 00:08:26.028 19:57:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.287 [2024-11-19 19:57:59.855678] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.855736] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.855746] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.858710] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.858825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.858857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.861171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. 
Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.861270] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.861301] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.863550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.863626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 [2024-11-19 19:57:59.863654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63165) is not found. Dropping the request. 00:08:26.287 Child process pid: 63686 00:08:26.287 [Child] Asynchronous Event Request test 00:08:26.287 [Child] Attached to 0000:00:13.0 00:08:26.287 [Child] Attached to 0000:00:10.0 00:08:26.287 [Child] Attached to 0000:00:11.0 00:08:26.287 [Child] Attached to 0000:00:12.0 00:08:26.287 [Child] Registering asynchronous event callbacks... 00:08:26.287 [Child] Getting orig temperature thresholds of all controllers 00:08:26.287 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.287 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.287 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.287 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.287 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:26.287 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.287 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.287 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.287 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.287 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.287 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.287 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.287 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.287 [Child] Cleaning up... 00:08:26.546 Asynchronous Event Request test 00:08:26.546 Attached to 0000:00:13.0 00:08:26.546 Attached to 0000:00:10.0 00:08:26.546 Attached to 0000:00:11.0 00:08:26.546 Attached to 0000:00:12.0 00:08:26.546 Reset controller to setup AER completions for this process 00:08:26.546 Registering asynchronous event callbacks... 
00:08:26.546 Getting orig temperature thresholds of all controllers 00:08:26.546 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.546 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.546 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.546 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.546 Setting all controllers temperature threshold low to trigger AER 00:08:26.546 Waiting for all controllers temperature threshold to be set lower 00:08:26.546 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.546 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:26.546 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.546 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:26.546 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.546 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:26.546 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.546 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:26.546 Waiting for all controllers to trigger AER and reset threshold 00:08:26.546 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.546 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.546 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.546 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.546 Cleaning up... 00:08:26.546 00:08:26.546 real 0m0.462s 00:08:26.546 user 0m0.161s 00:08:26.546 sys 0m0.195s 00:08:26.546 19:58:00 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.546 ************************************ 00:08:26.546 END TEST nvme_multi_aen 00:08:26.546 ************************************ 00:08:26.546 19:58:00 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:26.546 19:58:00 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:26.546 19:58:00 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:26.546 19:58:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.546 19:58:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.546 ************************************ 00:08:26.546 START TEST nvme_startup 00:08:26.546 ************************************ 00:08:26.546 19:58:00 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:26.804 Initializing NVMe Controllers 00:08:26.804 Attached to 0000:00:13.0 00:08:26.804 Attached to 0000:00:10.0 00:08:26.804 Attached to 0000:00:11.0 00:08:26.804 Attached to 0000:00:12.0 00:08:26.804 Initialization complete. 00:08:26.804 Time used:158751.922 (us). 
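The single_aen and multi_aen runs above trigger an AER by lowering each controller's composite temperature threshold (feature 0x04, Kelvin-valued: 343 K originally versus the 323 K current readings) below the current temperature and then restoring it in the callback. Outside of these SPDK tests the same feature can be inspected and set with nvme-cli; the device path and the 300 K value below are placeholders, not taken from this run:

  # Read the current temperature threshold, then set it below the ~323 K readings seen above.
  nvme get-feature /dev/nvme0 -f 0x04 -H
  nvme set-feature /dev/nvme0 -f 0x04 -v 300    # hypothetical threshold in Kelvin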
00:08:26.804 00:08:26.804 real 0m0.224s 00:08:26.804 user 0m0.071s 00:08:26.804 sys 0m0.107s 00:08:26.804 19:58:00 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.804 ************************************ 00:08:26.804 END TEST nvme_startup 00:08:26.804 ************************************ 00:08:26.804 19:58:00 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:26.804 19:58:00 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:26.804 19:58:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.804 19:58:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.804 19:58:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.804 ************************************ 00:08:26.804 START TEST nvme_multi_secondary 00:08:26.804 ************************************ 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63742 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63743 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:26.805 19:58:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:30.088 Initializing NVMe Controllers 00:08:30.088 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.088 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:30.088 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:30.088 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:30.088 Initialization complete. Launching workers. 
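nvme_multi_secondary runs three spdk_nvme_perf instances concurrently against the same controllers by giving each one the same shared-memory id (-i 0) while pinning them to disjoint core masks (0x1, 0x2, 0x4). From the xtrace above, the 0x1 and 0x2 instances have their pids captured (pid0=63742, pid1=63743) and are waited on later, while the 0x4 instance is not. A reduced sketch of that launch pattern, reconstructed from the echoed commands with details simplified:

  # Three perf instances sharing -i 0 on cores 0, 1 and 2; two backgrounded, one foreground.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, longest run (5 s)
  pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1, 3 s
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core 2, 3 s, foreground
  wait "$pid0"
  wait "$pid1"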
00:08:30.088 ======================================================== 00:08:30.088 Latency(us) 00:08:30.088 Device Information : IOPS MiB/s Average min max 00:08:30.088 PCIE (0000:00:13.0) NSID 1 from core 2: 3325.36 12.99 4810.74 863.48 14280.72 00:08:30.088 PCIE (0000:00:10.0) NSID 1 from core 2: 3325.36 12.99 4809.57 850.84 14327.16 00:08:30.088 PCIE (0000:00:11.0) NSID 1 from core 2: 3325.36 12.99 4811.76 868.33 14496.40 00:08:30.088 PCIE (0000:00:12.0) NSID 1 from core 2: 3325.36 12.99 4811.93 875.73 14369.41 00:08:30.088 PCIE (0000:00:12.0) NSID 2 from core 2: 3325.36 12.99 4812.50 858.93 14073.14 00:08:30.088 PCIE (0000:00:12.0) NSID 3 from core 2: 3325.36 12.99 4812.73 874.56 13362.27 00:08:30.088 ======================================================== 00:08:30.088 Total : 19952.13 77.94 4811.54 850.84 14496.40 00:08:30.088 00:08:30.088 19:58:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63742 00:08:30.088 Initializing NVMe Controllers 00:08:30.088 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.088 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.088 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.088 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.088 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.088 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.088 Initialization complete. Launching workers. 00:08:30.088 ======================================================== 00:08:30.088 Latency(us) 00:08:30.088 Device Information : IOPS MiB/s Average min max 00:08:30.088 PCIE (0000:00:13.0) NSID 1 from core 1: 7762.96 30.32 2060.63 734.23 6821.94 00:08:30.088 PCIE (0000:00:10.0) NSID 1 from core 1: 7762.96 30.32 2059.62 715.95 6631.41 00:08:30.088 PCIE (0000:00:11.0) NSID 1 from core 1: 7762.96 30.32 2060.55 727.08 6485.81 00:08:30.088 PCIE (0000:00:12.0) NSID 1 from core 1: 7762.96 30.32 2060.48 727.52 7176.43 00:08:30.088 PCIE (0000:00:12.0) NSID 2 from core 1: 7762.96 30.32 2060.46 739.19 7167.32 00:08:30.088 PCIE (0000:00:12.0) NSID 3 from core 1: 7762.96 30.32 2060.43 665.45 7209.29 00:08:30.088 ======================================================== 00:08:30.088 Total : 46577.75 181.94 2060.36 665.45 7209.29 00:08:30.088 00:08:32.618 Initializing NVMe Controllers 00:08:32.618 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.618 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.618 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.618 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.618 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:32.618 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:32.618 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:32.618 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:32.618 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:32.618 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:32.618 Initialization complete. Launching workers. 
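In the Device Information tables around this point, the MiB/s column is the IOPS column scaled by the 4096-byte IO size (-o 4096): MiB/s = IOPS * 4096 / 2^20. A check against three IOPS figures taken from these tables:

  # 3325.36, 7762.96 and 10738.94 IOPS are figures from the multi_secondary tables here.
  awk 'BEGIN {
    split("3325.36 7762.96 10738.94", iops, " ")
    for (i = 1; i <= 3; i++)
      printf "%10.2f IOPS -> %6.2f MiB/s\n", iops[i], iops[i] * 4096 / 1048576
  }'
  # prints 12.99, 30.32 and 41.95 MiB/s, matching the tables.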
00:08:32.618 ======================================================== 00:08:32.618 Latency(us) 00:08:32.618 Device Information : IOPS MiB/s Average min max 00:08:32.618 PCIE (0000:00:13.0) NSID 1 from core 0: 10738.94 41.95 1489.52 704.43 5894.62 00:08:32.619 PCIE (0000:00:10.0) NSID 1 from core 0: 10738.94 41.95 1488.71 686.38 6722.96 00:08:32.619 PCIE (0000:00:11.0) NSID 1 from core 0: 10738.94 41.95 1489.55 668.41 6873.92 00:08:32.619 PCIE (0000:00:12.0) NSID 1 from core 0: 10738.94 41.95 1489.53 694.63 6641.97 00:08:32.619 PCIE (0000:00:12.0) NSID 2 from core 0: 10738.94 41.95 1489.51 701.10 5999.94 00:08:32.619 PCIE (0000:00:12.0) NSID 3 from core 0: 10738.94 41.95 1489.49 702.25 5834.02 00:08:32.619 ======================================================== 00:08:32.619 Total : 64433.64 251.69 1489.39 668.41 6873.92 00:08:32.619 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63743 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63812 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63813 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:32.619 19:58:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:35.917 Initializing NVMe Controllers 00:08:35.917 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:35.917 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:35.917 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:35.917 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:35.917 Initialization complete. Launching workers. 
00:08:35.917 ======================================================== 00:08:35.917 Latency(us) 00:08:35.917 Device Information : IOPS MiB/s Average min max 00:08:35.917 PCIE (0000:00:13.0) NSID 1 from core 1: 4974.36 19.43 3216.06 714.97 14377.99 00:08:35.917 PCIE (0000:00:10.0) NSID 1 from core 1: 4974.36 19.43 3215.04 696.59 14242.14 00:08:35.917 PCIE (0000:00:11.0) NSID 1 from core 1: 4974.36 19.43 3216.57 709.23 14824.56 00:08:35.917 PCIE (0000:00:12.0) NSID 1 from core 1: 4974.36 19.43 3217.53 720.57 14666.21 00:08:35.917 PCIE (0000:00:12.0) NSID 2 from core 1: 4974.36 19.43 3217.99 720.29 12687.36 00:08:35.917 PCIE (0000:00:12.0) NSID 3 from core 1: 4974.36 19.43 3218.35 723.01 13567.72 00:08:35.917 ======================================================== 00:08:35.917 Total : 29846.16 116.59 3216.92 696.59 14824.56 00:08:35.917 00:08:35.917 Initializing NVMe Controllers 00:08:35.917 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:35.917 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:35.917 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:35.917 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:35.917 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:35.917 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:35.917 Initialization complete. Launching workers. 00:08:35.917 ======================================================== 00:08:35.917 Latency(us) 00:08:35.917 Device Information : IOPS MiB/s Average min max 00:08:35.917 PCIE (0000:00:13.0) NSID 1 from core 0: 4215.96 16.47 3794.60 800.59 13780.22 00:08:35.917 PCIE (0000:00:10.0) NSID 1 from core 0: 4215.96 16.47 3793.96 767.82 12549.64 00:08:35.917 PCIE (0000:00:11.0) NSID 1 from core 0: 4215.96 16.47 3795.12 788.50 11984.30 00:08:35.917 PCIE (0000:00:12.0) NSID 1 from core 0: 4215.96 16.47 3795.04 783.83 12459.47 00:08:35.917 PCIE (0000:00:12.0) NSID 2 from core 0: 4215.96 16.47 3794.99 817.08 14961.07 00:08:35.917 PCIE (0000:00:12.0) NSID 3 from core 0: 4215.96 16.47 3794.94 802.21 13763.01 00:08:35.917 ======================================================== 00:08:35.917 Total : 25295.73 98.81 3794.78 767.82 14961.07 00:08:35.917 00:08:37.836 Initializing NVMe Controllers 00:08:37.836 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.836 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.836 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.836 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.836 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:37.836 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:37.836 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:37.836 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:37.836 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:37.836 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:37.836 Initialization complete. Launching workers. 
00:08:37.836 ======================================================== 00:08:37.836 Latency(us) 00:08:37.836 Device Information : IOPS MiB/s Average min max 00:08:37.836 PCIE (0000:00:13.0) NSID 1 from core 2: 2128.30 8.31 7516.94 881.17 31164.95 00:08:37.836 PCIE (0000:00:10.0) NSID 1 from core 2: 2128.30 8.31 7517.21 869.13 32415.70 00:08:37.836 PCIE (0000:00:11.0) NSID 1 from core 2: 2128.30 8.31 7518.03 884.89 32487.27 00:08:37.836 PCIE (0000:00:12.0) NSID 1 from core 2: 2128.30 8.31 7517.94 864.35 28992.61 00:08:37.836 PCIE (0000:00:12.0) NSID 2 from core 2: 2128.30 8.31 7512.14 880.16 37187.95 00:08:37.836 PCIE (0000:00:12.0) NSID 3 from core 2: 2128.30 8.31 7511.67 890.92 32889.44 00:08:37.836 ======================================================== 00:08:37.836 Total : 12769.82 49.88 7515.66 864.35 37187.95 00:08:37.836 00:08:37.836 19:58:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63812 00:08:37.836 19:58:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63813 00:08:37.836 00:08:37.836 real 0m10.781s 00:08:37.836 user 0m18.335s 00:08:37.836 sys 0m0.661s 00:08:37.836 19:58:11 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.836 19:58:11 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 ************************************ 00:08:37.836 END TEST nvme_multi_secondary 00:08:37.836 ************************************ 00:08:37.836 19:58:11 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:37.836 19:58:11 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62757 ]] 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1094 -- # kill 62757 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1095 -- # wait 62757 00:08:37.836 [2024-11-19 19:58:11.262254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.262316] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.262337] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.262351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.264749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.264805] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.264820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.264833] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.267163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 
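[Editor's note] The latency tables above come from spdk_nvme_perf being run three times concurrently against the same four controllers: the instances share controller access through a common shared-memory group id (-i 0) and are pinned to separate cores (-c 0x1 / 0x2 / 0x4). A minimal sketch of that launch pattern, using only the binary path and options visible in this log; the real script (nvme/nvme.sh) backgrounds the secondaries and waits on their PIDs (63812/63813 above), while this sketch simply backgrounds all three and waits:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # Three concurrent 4 KiB read workloads, queue depth 16, same shm group (-i 0)
    # so they attach to the same controllers, each on its own core mask.
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 &
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 &
    pid2=$!

    wait "$pid0" "$pid1" "$pid2"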
00:08:37.836 [2024-11-19 19:58:11.267240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.267257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.267269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.269728] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.269787] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.269801] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 [2024-11-19 19:58:11.269814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63685) is not found. Dropping the request. 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:37.836 19:58:11 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.836 19:58:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 ************************************ 00:08:37.836 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:37.836 ************************************ 00:08:37.836 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:37.836 * Looking for test storage... 
00:08:37.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.837 --rc genhtml_branch_coverage=1 00:08:37.837 --rc genhtml_function_coverage=1 00:08:37.837 --rc genhtml_legend=1 00:08:37.837 --rc geninfo_all_blocks=1 00:08:37.837 --rc geninfo_unexecuted_blocks=1 00:08:37.837 00:08:37.837 ' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.837 --rc genhtml_branch_coverage=1 00:08:37.837 --rc genhtml_function_coverage=1 00:08:37.837 --rc genhtml_legend=1 00:08:37.837 --rc geninfo_all_blocks=1 00:08:37.837 --rc geninfo_unexecuted_blocks=1 00:08:37.837 00:08:37.837 ' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.837 --rc genhtml_branch_coverage=1 00:08:37.837 --rc genhtml_function_coverage=1 00:08:37.837 --rc genhtml_legend=1 00:08:37.837 --rc geninfo_all_blocks=1 00:08:37.837 --rc geninfo_unexecuted_blocks=1 00:08:37.837 00:08:37.837 ' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.837 --rc genhtml_branch_coverage=1 00:08:37.837 --rc genhtml_function_coverage=1 00:08:37.837 --rc genhtml_legend=1 00:08:37.837 --rc geninfo_all_blocks=1 00:08:37.837 --rc geninfo_unexecuted_blocks=1 00:08:37.837 00:08:37.837 ' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:37.837 
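[Editor's note] The lt/cmp_versions trace above is only deciding whether the installed lcov is older than 2.x before picking coverage option names. A condensed, standalone version of that comparison, simplified from scripts/common.sh (the real helper also normalizes non-numeric version components):

    # Returns success (0) when $1 is an older version than $2.
    lt() {
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }

    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"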
19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63981 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63981 00:08:37.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63981 ']' 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
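[Editor's note] get_first_nvme_bdf above picks the target controller by asking gen_nvme.sh for the JSON config and taking the first PCI address; the RPC sequence traced below then attaches that controller, arms an admin-command error injection, and resets the controller while a Get Features command is held. A condensed sketch of that flow, with paths, RPC names, and option values taken from this log; the redirections, sleeps, and timing assertions of the real nvme_reset_stuck_adm_cmd.sh are simplified:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc="$rootdir/scripts/rpc.py"

    # Pick the first NVMe controller gen_nvme.sh reports (0000:00:10.0 in this run).
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[0].params.traddr')

    # Attach it to the running spdk_tgt and arm a one-shot error injection on the
    # next Get Features admin command (opcode 10): complete it with sct=0/sc=1, but
    # hold it for up to 15 s instead of submitting it (--do_not_submit).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Send a Get Features (Number of Queues) that will get stuck behind the
    # injection, then reset the controller while the command is still pending.
    tmp=$(mktemp /tmp/err_inj_XXXXX.txt)
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > "$tmp" &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0
    wait

    # The reset has to complete the held command manually; the test then decodes the
    # returned completion (cpl) and checks it carries the injected sct=0 / sc=1.
    jq -r .cpl "$tmp" | base64 -d | hexdump -ve '/1 "0x%02x\n"'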
00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.837 19:58:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.098 [2024-11-19 19:58:11.673216] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:38.098 [2024-11-19 19:58:11.673350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63981 ] 00:08:38.098 [2024-11-19 19:58:11.844006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.359 [2024-11-19 19:58:11.944759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.359 [2024-11-19 19:58:11.945331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.359 [2024-11-19 19:58:11.945657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.359 [2024-11-19 19:58:11.945735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.931 nvme0n1 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:38.931 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_C01PN.txt 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.932 true 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732046292 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64004 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:38.932 19:58:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 [2024-11-19 19:58:14.630136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:40.848 [2024-11-19 19:58:14.630732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:40.848 [2024-11-19 19:58:14.630840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:40.848 [2024-11-19 19:58:14.630894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.848 [2024-11-19 19:58:14.634162] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:40.848 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64004 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64004 00:08:40.848 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64004 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_C01PN.txt 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_C01PN.txt 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63981 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63981 ']' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63981 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63981 00:08:41.110 killing process with pid 63981 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63981' 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63981 00:08:41.110 19:58:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63981 00:08:42.489 19:58:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:42.489 19:58:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:42.489 00:08:42.489 real 0m4.763s 00:08:42.489 user 0m16.997s 00:08:42.489 sys 0m0.489s 00:08:42.489 19:58:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:42.489 19:58:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.489 ************************************ 00:08:42.489 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:42.489 ************************************ 00:08:42.489 19:58:16 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:42.489 19:58:16 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:42.489 19:58:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.489 19:58:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.489 19:58:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.489 ************************************ 00:08:42.489 START TEST nvme_fio 00:08:42.489 ************************************ 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.489 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:42.489 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:42.748 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:42.748 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.006 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.006 19:58:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.006 19:58:16 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.006 19:58:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.264 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.264 fio-3.35 00:08:43.264 Starting 1 thread 00:08:49.819 00:08:49.819 test: (groupid=0, jobs=1): err= 0: pid=64138: Tue Nov 19 19:58:23 2024 00:08:49.819 read: IOPS=25.2k, BW=98.5MiB/s (103MB/s)(197MiB/2001msec) 00:08:49.819 slat (nsec): min=4194, max=76500, avg=4880.49, stdev=1865.50 00:08:49.819 clat (usec): min=230, max=8177, avg=2534.91, stdev=683.79 00:08:49.819 lat (usec): min=234, max=8215, avg=2539.79, stdev=684.96 00:08:49.819 clat percentiles (usec): 00:08:49.819 | 1.00th=[ 1369], 5.00th=[ 1975], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:49.819 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:08:49.819 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2966], 95.00th=[ 3589], 00:08:49.819 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 6915], 99.95th=[ 7111], 00:08:49.819 | 99.99th=[ 7963] 00:08:49.819 bw ( KiB/s): min=96600, max=100632, per=98.14%, avg=98992.00, stdev=2118.58, samples=3 00:08:49.819 iops : min=24150, max=25158, avg=24748.00, stdev=529.65, samples=3 00:08:49.819 write: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(196MiB/2001msec); 0 zone resets 00:08:49.819 slat (usec): min=4, max=111, avg= 5.13, stdev= 1.89 00:08:49.819 clat (usec): min=209, max=8104, avg=2532.01, stdev=679.44 00:08:49.819 lat (usec): min=214, max=8122, avg=2537.14, stdev=680.59 00:08:49.819 clat percentiles (usec): 00:08:49.819 | 1.00th=[ 1352], 5.00th=[ 1958], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:49.819 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:08:49.819 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2966], 95.00th=[ 3589], 00:08:49.819 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 6849], 99.95th=[ 7177], 00:08:49.819 | 99.99th=[ 7898] 00:08:49.819 bw ( KiB/s): min=96968, max=100552, per=98.70%, avg=99069.33, stdev=1870.38, samples=3 00:08:49.819 iops : min=24242, max=25138, avg=24767.33, stdev=467.60, samples=3 00:08:49.819 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.13% 00:08:49.819 lat (msec) : 2=5.24%, 4=90.38%, 10=4.20% 00:08:49.819 cpu : usr=99.25%, sys=0.10%, ctx=2, majf=0, minf=607 
00:08:49.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:49.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.819 issued rwts: total=50460,50213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.819 00:08:49.819 Run status group 0 (all jobs): 00:08:49.819 READ: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=197MiB (207MB), run=2001-2001msec 00:08:49.819 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=196MiB (206MB), run=2001-2001msec 00:08:49.819 ----------------------------------------------------- 00:08:49.819 Suppressions used: 00:08:49.819 count bytes template 00:08:49.819 1 32 /usr/src/fio/parse.c 00:08:49.819 1 8 libtcmalloc_minimal.so 00:08:49.819 ----------------------------------------------------- 00:08:49.819 00:08:49.819 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:49.819 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:49.819 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:49.819 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.078 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.078 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:50.336 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.336 19:58:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1356 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.336 19:58:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:50.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:50.336 fio-3.35 00:08:50.336 Starting 1 thread 00:08:56.891 00:08:56.891 test: (groupid=0, jobs=1): err= 0: pid=64199: Tue Nov 19 19:58:30 2024 00:08:56.891 read: IOPS=24.7k, BW=96.3MiB/s (101MB/s)(193MiB/2001msec) 00:08:56.891 slat (nsec): min=4225, max=61670, avg=4933.86, stdev=2042.60 00:08:56.891 clat (usec): min=204, max=9491, avg=2593.11, stdev=770.22 00:08:56.891 lat (usec): min=209, max=9524, avg=2598.04, stdev=771.58 00:08:56.891 clat percentiles (usec): 00:08:56.891 | 1.00th=[ 1647], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:56.891 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:56.891 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2835], 95.00th=[ 4424], 00:08:56.891 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7308], 00:08:56.891 | 99.99th=[ 9372] 00:08:56.891 bw ( KiB/s): min=92344, max=99616, per=98.22%, avg=96850.67, stdev=3936.33, samples=3 00:08:56.891 iops : min=23086, max=24904, avg=24212.67, stdev=984.08, samples=3 00:08:56.891 write: IOPS=24.5k, BW=95.7MiB/s (100MB/s)(191MiB/2001msec); 0 zone resets 00:08:56.891 slat (nsec): min=4292, max=69989, avg=5174.93, stdev=2069.73 00:08:56.891 clat (usec): min=224, max=9429, avg=2595.37, stdev=773.76 00:08:56.891 lat (usec): min=229, max=9443, avg=2600.54, stdev=775.11 00:08:56.891 clat percentiles (usec): 00:08:56.891 | 1.00th=[ 1647], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:56.891 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:56.891 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2835], 95.00th=[ 4490], 00:08:56.891 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7570], 00:08:56.891 | 99.99th=[ 9241] 00:08:56.891 bw ( KiB/s): min=92192, max=100920, per=98.98%, avg=96989.33, stdev=4428.07, samples=3 00:08:56.891 iops : min=23048, max=25230, avg=24247.33, stdev=1107.02, samples=3 00:08:56.891 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.05% 00:08:56.891 lat (msec) : 2=2.99%, 4=90.97%, 10=5.97% 00:08:56.891 cpu : usr=99.20%, sys=0.10%, ctx=17, majf=0, minf=607 00:08:56.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:56.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.891 issued rwts: total=49328,49018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.891 00:08:56.891 Run status group 0 (all jobs): 00:08:56.891 READ: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:08:56.891 WRITE: bw=95.7MiB/s (100MB/s), 95.7MiB/s-95.7MiB/s (100MB/s-100MB/s), io=191MiB (201MB), run=2001-2001msec 00:08:56.891 ----------------------------------------------------- 00:08:56.891 Suppressions used: 00:08:56.891 count bytes template 00:08:56.891 1 32 /usr/src/fio/parse.c 00:08:56.891 1 8 libtcmalloc_minimal.so 00:08:56.891 ----------------------------------------------------- 00:08:56.891 00:08:56.891 19:58:30 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:08:56.891 19:58:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:56.891 19:58:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:56.891 19:58:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:57.149 19:58:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:57.149 19:58:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.408 19:58:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.408 19:58:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.408 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.409 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.409 19:58:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.666 fio-3.35 00:08:57.666 Starting 1 thread 00:09:04.263 00:09:04.263 test: (groupid=0, jobs=1): err= 0: pid=64255: Tue Nov 19 19:58:37 2024 00:09:04.263 read: IOPS=21.1k, BW=82.4MiB/s (86.4MB/s)(165MiB/2001msec) 00:09:04.263 slat (usec): min=3, max=183, avg= 5.22, stdev= 2.68 00:09:04.263 clat (usec): min=215, max=9509, avg=3024.43, stdev=1010.45 00:09:04.263 lat (usec): min=220, max=9514, avg=3029.64, stdev=1011.52 00:09:04.263 clat percentiles (usec): 00:09:04.263 | 1.00th=[ 1844], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:04.263 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2638], 
60.00th=[ 2802], 00:09:04.263 | 70.00th=[ 3064], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5211], 00:09:04.263 | 99.00th=[ 6587], 99.50th=[ 7046], 99.90th=[ 8094], 99.95th=[ 8455], 00:09:04.263 | 99.99th=[ 9372] 00:09:04.263 bw ( KiB/s): min=76984, max=85496, per=96.78%, avg=81650.67, stdev=4315.03, samples=3 00:09:04.263 iops : min=19246, max=21374, avg=20412.67, stdev=1078.76, samples=3 00:09:04.263 write: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec); 0 zone resets 00:09:04.263 slat (nsec): min=3512, max=64300, avg=5317.94, stdev=2337.45 00:09:04.263 clat (usec): min=230, max=9675, avg=3037.44, stdev=1017.09 00:09:04.263 lat (usec): min=235, max=9689, avg=3042.75, stdev=1018.12 00:09:04.263 clat percentiles (usec): 00:09:04.263 | 1.00th=[ 1844], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:04.263 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2835], 00:09:04.263 | 70.00th=[ 3064], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5211], 00:09:04.263 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 8029], 99.95th=[ 8356], 00:09:04.263 | 99.99th=[ 9241] 00:09:04.263 bw ( KiB/s): min=77424, max=85216, per=97.55%, avg=81800.00, stdev=3983.72, samples=3 00:09:04.263 iops : min=19356, max=21304, avg=20450.00, stdev=995.93, samples=3 00:09:04.263 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:09:04.263 lat (msec) : 2=1.99%, 4=82.48%, 10=15.45% 00:09:04.263 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:04.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.263 issued rwts: total=42203,41947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.263 00:09:04.263 Run status group 0 (all jobs): 00:09:04.263 READ: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:04.263 WRITE: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:04.263 ----------------------------------------------------- 00:09:04.263 Suppressions used: 00:09:04.263 count bytes template 00:09:04.263 1 32 /usr/src/fio/parse.c 00:09:04.263 1 8 libtcmalloc_minimal.so 00:09:04.263 ----------------------------------------------------- 00:09:04.263 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.263 19:58:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:04.263 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.264 19:58:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.524 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.524 fio-3.35 00:09:04.524 Starting 1 thread 00:09:14.498 00:09:14.498 test: (groupid=0, jobs=1): err= 0: pid=64316: Tue Nov 19 19:58:46 2024 00:09:14.498 read: IOPS=20.9k, BW=81.7MiB/s (85.7MB/s)(163MiB/2001msec) 00:09:14.498 slat (usec): min=3, max=111, avg= 5.34, stdev= 2.61 00:09:14.498 clat (usec): min=205, max=9911, avg=3051.03, stdev=1094.85 00:09:14.498 lat (usec): min=209, max=9925, avg=3056.37, stdev=1096.23 00:09:14.498 clat percentiles (usec): 00:09:14.498 | 1.00th=[ 1844], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:14.498 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2769], 00:09:14.498 | 70.00th=[ 2999], 80.00th=[ 3556], 90.00th=[ 4752], 95.00th=[ 5604], 00:09:14.498 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7767], 99.95th=[ 8029], 00:09:14.498 | 99.99th=[ 9503] 00:09:14.498 bw ( KiB/s): min=80608, max=88870, per=100.00%, avg=84071.33, stdev=4289.81, samples=3 00:09:14.498 iops : min=20152, max=22217, avg=21017.67, stdev=1072.17, samples=3 00:09:14.498 write: IOPS=20.8k, BW=81.3MiB/s (85.3MB/s)(163MiB/2001msec); 0 zone resets 00:09:14.498 slat (nsec): min=3391, max=71560, avg=5488.19, stdev=2592.08 00:09:14.498 clat (usec): min=251, max=9709, avg=3056.35, stdev=1085.93 00:09:14.498 lat (usec): min=255, max=9714, avg=3061.84, stdev=1087.29 00:09:14.498 clat percentiles (usec): 00:09:14.498 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:14.499 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2769], 00:09:14.499 | 70.00th=[ 2999], 80.00th=[ 3556], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:14.499 | 
99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7832], 99.95th=[ 7963], 00:09:14.499 | 99.99th=[ 9110] 00:09:14.499 bw ( KiB/s): min=81088, max=88279, per=100.00%, avg=84127.67, stdev=3722.16, samples=3 00:09:14.499 iops : min=20272, max=22069, avg=21031.67, stdev=930.12, samples=3 00:09:14.499 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.05% 00:09:14.499 lat (msec) : 2=1.70%, 4=82.58%, 10=15.64% 00:09:14.499 cpu : usr=99.05%, sys=0.15%, ctx=3, majf=0, minf=606 00:09:14.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.499 issued rwts: total=41850,41657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.499 00:09:14.499 Run status group 0 (all jobs): 00:09:14.499 READ: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:14.499 WRITE: bw=81.3MiB/s (85.3MB/s), 81.3MiB/s-81.3MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:14.499 ----------------------------------------------------- 00:09:14.499 Suppressions used: 00:09:14.499 count bytes template 00:09:14.499 1 32 /usr/src/fio/parse.c 00:09:14.499 1 8 libtcmalloc_minimal.so 00:09:14.499 ----------------------------------------------------- 00:09:14.499 00:09:14.499 ************************************ 00:09:14.499 END TEST nvme_fio 00:09:14.499 ************************************ 00:09:14.499 19:58:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.499 19:58:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:14.499 00:09:14.499 real 0m30.711s 00:09:14.499 user 0m20.734s 00:09:14.499 sys 0m16.834s 00:09:14.499 19:58:46 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.499 19:58:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:14.499 ************************************ 00:09:14.499 END TEST nvme 00:09:14.499 ************************************ 00:09:14.499 00:09:14.499 real 1m41.778s 00:09:14.499 user 3m44.699s 00:09:14.499 sys 0m27.741s 00:09:14.499 19:58:46 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.499 19:58:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.499 19:58:47 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:14.499 19:58:47 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.499 19:58:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.499 19:58:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.499 19:58:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.499 ************************************ 00:09:14.499 START TEST nvme_scc 00:09:14.499 ************************************ 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.499 * Looking for test storage... 
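[Editor's note] Each of the four fio runs above uses the SPDK fio plugin rather than the kernel NVMe driver: the plugin library is LD_PRELOADed, ioengine=spdk is set in example_config.fio, and the target controller is encoded in the filename as 'trtype=PCIe traddr=0000.00.10.0' (dots standing in for the colons of the PCI address). A minimal sketch of an equivalent standalone invocation, assuming the repo and fio paths shown in this log; the libasan preload is only needed because this build is ASan-instrumented, as the trace above checks with ldd:

    fio_dir=/usr/src/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # Preload the SPDK ioengine (and ASan, matching this run) so fio can resolve
    # ioengine=spdk from the job file, then point it at one PCIe controller.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        "$fio_dir/fio" "$job" \
        --filename='trtype=PCIe traddr=0000.00.10.0' \
        --bs=4096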
00:09:14.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.499 --rc genhtml_branch_coverage=1 00:09:14.499 --rc genhtml_function_coverage=1 00:09:14.499 --rc genhtml_legend=1 00:09:14.499 --rc geninfo_all_blocks=1 00:09:14.499 --rc geninfo_unexecuted_blocks=1 00:09:14.499 00:09:14.499 ' 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.499 --rc genhtml_branch_coverage=1 00:09:14.499 --rc genhtml_function_coverage=1 00:09:14.499 --rc genhtml_legend=1 00:09:14.499 --rc geninfo_all_blocks=1 00:09:14.499 --rc geninfo_unexecuted_blocks=1 00:09:14.499 00:09:14.499 ' 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:14.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.499 --rc genhtml_branch_coverage=1 00:09:14.499 --rc genhtml_function_coverage=1 00:09:14.499 --rc genhtml_legend=1 00:09:14.499 --rc geninfo_all_blocks=1 00:09:14.499 --rc geninfo_unexecuted_blocks=1 00:09:14.499 00:09:14.499 ' 00:09:14.499 19:58:47 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.499 --rc genhtml_branch_coverage=1 00:09:14.499 --rc genhtml_function_coverage=1 00:09:14.499 --rc genhtml_legend=1 00:09:14.499 --rc geninfo_all_blocks=1 00:09:14.499 --rc geninfo_unexecuted_blocks=1 00:09:14.499 00:09:14.499 ' 00:09:14.499 19:58:47 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.499 19:58:47 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.499 19:58:47 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.499 19:58:47 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.499 19:58:47 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.499 19:58:47 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:14.499 19:58:47 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
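Annotation: the lcov version gate traced above leans on the cmp_versions helper from scripts/common.sh — both version strings are split into numeric fields on IFS=.-: and compared field by field, so "lt 1.15 2" succeeds because 1 < 2 in the first field. A condensed sketch of the same idea (the real helper additionally pads field counts and guards non-numeric components):

    lt() {    # lt A B -> success when version A sorts strictly before B
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # same outcome as the trace above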
00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:14.499 19:58:47 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:14.499 19:58:47 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.499 19:58:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:14.499 19:58:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:14.499 19:58:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:14.500 19:58:47 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:14.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.500 Waiting for block devices as requested 00:09:14.500 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.500 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.500 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.500 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.838 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:19.838 19:58:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:19.838 19:58:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.838 19:58:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:19.838 19:58:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.838 19:58:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
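Annotation: the long run of eval lines that follows is scan_nvme_ctrls caching every id-ctrl field of the controller into a bash associative array — nvme-cli prints one "reg : val" pair per line, and functions.sh splits each on IFS=: and evals the assignment. Stripped of the xtrace noise and the eval bookkeeping, the core of nvme_get is roughly this (a condensed sketch, not the verbatim helper):

    declare -A nvme0
    while IFS=: read -r reg val; do              # one "reg : val" pair per output line
        reg=${reg//[[:space:]]/}                 # strip the padding around the key
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }   # keep value, drop lead space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]}"                     # 0x1b36, the QEMU vendor ID seen above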
00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:19.838 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
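Annotation: several of the values just captured are bitmasks rather than scalars. oacs=0x12a, for instance, advertises which optional admin commands this QEMU controller implements; decoding it against the bit positions defined in the NVMe base specification (a sketch, bit names abbreviated):

    oacs=0x12a                                          # from the trace above
    (( oacs & 1<<1 )) && echo "Format NVM"              # bit 1
    (( oacs & 1<<3 )) && echo "Namespace Management"    # bit 3
    (( oacs & 1<<5 )) && echo "Directives"              # bit 5
    (( oacs & 1<<8 )) && echo "Doorbell Buffer Config"  # bit 8

All four bits are set in 0x12a; the neighbouring frmw=0x3 and lpa=0x7 fields pack firmware-slot and log-page capabilities the same way.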
00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:19.839 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:19.840 19:58:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:19.840 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.841 19:58:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.841 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
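Annotation: taken together, the namespace fields pin down the geometry — nsze=0x140000 gives the size in logical blocks, and flbas=0x4 (captured earlier) selects LBA format 4, whose lbaf entry just below reads lbads:12, i.e. 2^12 = 4096-byte blocks. Working out the usable capacity from those trace values:

    nsze=$(( 0x140000 ))                     # 1310720 logical blocks
    lbads=12                                 # from the lbaf4 entry below, marked (in use)
    echo $(( nsze * (1 << lbads) )) bytes    # 5368709120 bytes = 5 GiB

i.e. a 5 GiB namespace behind /dev/nvme0n1.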
00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.842 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.843 19:58:53 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:19.843 19:58:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.843 19:58:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:19.843 19:58:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.843 19:58:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:19.843 19:58:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.843 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.844 
19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
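The block above is the nvme_get helper (nvme/functions.sh@16-23 in the trace) consuming `nvme id-ctrl /dev/nvme1` line by line: each `register : value` pair is split on the colon and stored in a bash associative array, which is why every field appears as an `eval 'nvme1[...]=...'` step. A minimal stand-alone sketch of that pattern, using a fixed array name instead of the nameref/eval indirection the real helper uses (trimming details here are illustrative, not copied from the repo):

  # Sketch only: parse `nvme id-ctrl` output into an associative array,
  # mirroring the IFS=: / read -r reg val loop visible in the trace.
  declare -A nvme1=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # "vid       " -> "vid"
      val=${val# }                  # drop the space that follows the colon
      [[ -n $reg && -n $val ]] || continue
      nvme1[$reg]=$val              # e.g. nvme1[vid]=0x1b36, nvme1[mdts]=7
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
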
00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:19.844 19:58:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.844 19:58:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:19.844 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:19.845 19:58:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:19.845 19:58:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.845 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
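One field worth calling out in the dump above is oncs=0x15d. ONCS is the Optional NVM Command Support bitmask from Identify Controller, and in recent NVMe revisions bit 8 advertises the Copy command, which is the capability a Simple Copy (nvme_scc) run ultimately cares about. A hypothetical check, not part of the traced scripts:

  # Hypothetical helper (not from nvme/functions.sh): true when ONCS bit 8
  # (Copy command) is set. 0x15d has bits 0,2,3,4,6,8 set, so this succeeds.
  supports_copy() { (( ($1 >> 8) & 1 )); }
  supports_copy 0x15d && echo "controller advertises the Copy command"
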
00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.846 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.847 
19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:19.847 19:58:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:19.847 19:58:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.847 19:58:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:19.848 19:58:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.848 19:58:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.848 19:58:53 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:19.848 19:58:53 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:19.849 19:58:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
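
The nvme/functions.sh@16-23 steps traced above amount to a key/value capture loop: the output of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2` is split on ':' and each non-empty register is eval'd into a global associative array (nvme2). A minimal self-contained sketch of that pattern, reconstructed from the trace; the helper name nvme_get_sketch and the exact whitespace trimming are assumptions, only the overall loop shape (local -gA, IFS=:, read -r reg val, eval) is taken from the log:

  #!/usr/bin/env bash
  # Sketch of the parsing loop traced at nvme/functions.sh@16-23 (reconstruction,
  # not the script itself): capture "reg : val" pairs from nvme-cli into a
  # dynamically named global associative array.
  nvme_get_sketch() {                 # hypothetical name; the traced helper is nvme_get
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"             # e.g. local -gA 'nvme2=()', as at functions.sh@20
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue           # header/blank lines have no value -> skip
          reg=${reg//[[:space:]]/}            # "vid       " -> "vid"
          val=${val# }                        # drop the single leading space, keep padding
          eval "${ref}[$reg]=\"\$val\""       # nvme2[vid]="0x1b36", nvme2[sn]="12342   ", ...
      done < <(nvme "$cmd" "$dev")
  }
  # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[subnqn]}"
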
00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:19.849 19:58:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:19.850 
19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
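
The controller capture above records nvme2[oncs]=0x15d. Assuming ONCS bit 8 denotes Copy command support per the NVMe base specification, that value indicates this QEMU controller advertises Copy, which is presumably the capability this nvme_scc (simple copy) run goes on to exercise. A hedged check, not taken from the traced script:

  # Assumption: ONCS bit 8 = Copy command support (NVMe base spec).
  oncs=0x15d                                   # value captured above for nvme2
  if (( oncs & 0x100 )); then
      echo "controller advertises the Copy command (ONCS bit 8 set)"
  fi
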
00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.852 19:58:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:19.852 19:58:53 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:19.853 19:58:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
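
The id-ns capture for nvme2n1 above records flbas=0x4 and lbaf4='ms:0 lbads:12 rp:0 (in use)', i.e. the in-use LBA format has a 2^12 = 4096-byte data size. A small sketch of deriving that from the captured fields; the array names come from the trace, but the derivation itself is an illustration, not the script's code:

  # Assumes the nvme2n1 associative array populated above is in scope.
  flbas=${nvme2n1[flbas]}                          # 0x4 in the trace
  fmt=$(( flbas & 0xf ))                           # low bits select the LBA format index -> 4
  lbaf=${nvme2n1[lbaf$fmt]}                        # 'ms:0 lbads:12 rp:0 (in use)'
  [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
  echo "in-use block size: $(( 1 << lbads )) bytes"   # 4096
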
00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
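
The functions.sh@53-58 lines interleaved in this trace show how namespaces are enumerated: a nameref array (nvme2_ns) is bound, each /sys/class/nvme/nvme2/nvme2n<N> sysfs entry is parsed with id-ns, and the result is indexed by namespace number. A sketch under those assumptions; the wrapper name is invented, and nvme_get_sketch refers to the earlier reconstruction:

  # Reconstruction of the namespace walk traced at nvme/functions.sh@53-58.
  walk_ctrl_namespaces() {                        # hypothetical wrapper for illustration
      local ctrl=$1 ns ns_dev
      local -n _ctrl_ns="${ctrl##*/}_ns"          # nameref, e.g. nvme2_ns (functions.sh@53)
      for ns in "$ctrl/${ctrl##*/}n"*; do         # nvme2n1, nvme2n2, nvme2n3, ...
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}                        # e.g. nvme2n1
          nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns##*n}]=$ns_dev             # _ctrl_ns[1]=nvme2n1, ... (functions.sh@58)
      done
  }
  declare -gA nvme2_ns=()
  walk_ctrl_namespaces /sys/class/nvme/nvme2
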
00:09:19.853 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 
19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 
19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:19.854 19:58:53 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:19.854 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.855 
19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:19.855 19:58:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.855 19:58:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:19.855 19:58:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.855 19:58:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 19:58:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:19.856 19:58:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 
19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:19.857 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.858 19:58:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 19:58:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:19.859 
19:58:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:19.859 19:58:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:19.859 19:58:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:19.859 19:58:53 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.692 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.692 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.692 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.692 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:20.692 19:58:54 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:20.692 19:58:54 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:20.692 19:58:54 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.692 19:58:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:20.692 ************************************ 00:09:20.692 START TEST nvme_simple_copy 00:09:20.692 ************************************ 00:09:20.692 19:58:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:20.953 Initializing NVMe Controllers 00:09:20.953 Attaching to 0000:00:10.0 00:09:20.953 Controller supports SCC. Attached to 0000:00:10.0 00:09:20.953 Namespace ID: 1 size: 6GB 00:09:20.953 Initialization complete. 00:09:20.953 00:09:20.953 Controller QEMU NVMe Ctrl (12340 ) 00:09:20.953 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:20.953 Namespace Block Size:4096 00:09:20.953 Writing LBAs 0 to 63 with Random Data 00:09:20.953 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:20.953 LBAs matching Written Data: 64 00:09:20.953 00:09:20.953 real 0m0.276s 00:09:20.953 user 0m0.115s 00:09:20.953 sys 0m0.060s 00:09:20.953 19:58:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.953 19:58:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:20.953 ************************************ 00:09:20.953 END TEST nvme_simple_copy 00:09:20.953 ************************************ 00:09:21.213 00:09:21.213 real 0m7.746s 00:09:21.213 user 0m1.005s 00:09:21.213 sys 0m1.285s 00:09:21.213 19:58:54 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.213 ************************************ 00:09:21.213 END TEST nvme_scc 00:09:21.213 ************************************ 00:09:21.213 19:58:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:21.213 19:58:54 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:21.213 19:58:54 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:21.213 19:58:54 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:21.213 19:58:54 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:21.213 19:58:54 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:21.213 19:58:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.213 19:58:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.213 19:58:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.213 ************************************ 00:09:21.213 START TEST nvme_fdp 00:09:21.213 ************************************ 00:09:21.213 19:58:54 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:21.213 * Looking for test storage... 
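A note on the controller selection traced above: ctrl_has_scc reduces to a single bit test on the ONCS field read out of id-ctrl, since ONCS bit 8 advertises the Copy command. A condensed, standalone sketch of that check follows; the function name and the 0x15d value are taken from the trace, while the literal array contents are illustrative:

    # Bit test behind ctrl_has_scc (nvme/functions.sh@184-188 in the trace):
    # ONCS bit 8 set means the controller implements the Copy command.
    ctrl_has_scc() {
        local -n _ctrl=$1              # nameref into the parsed ctrl array
        local oncs=${_ctrl[oncs]}
        (( oncs & 1 << 8 ))            # 0x15d & 0x100 = 0x100 -> supported
    }
    declare -A nvme1=([oncs]=0x15d)    # oncs value reported for nvme1 above
    ctrl_has_scc nvme1 && echo nvme1   # prints nvme1, as functions.sh@199 does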
00:09:21.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.213 19:58:54 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.213 19:58:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.213 19:58:54 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.213 19:58:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.213 19:58:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:21.214 19:58:54 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.214 19:58:54 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.214 --rc genhtml_branch_coverage=1 00:09:21.214 --rc genhtml_function_coverage=1 00:09:21.214 --rc genhtml_legend=1 00:09:21.214 --rc geninfo_all_blocks=1 00:09:21.214 --rc geninfo_unexecuted_blocks=1 00:09:21.214 00:09:21.214 ' 00:09:21.214 19:58:54 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.214 --rc genhtml_branch_coverage=1 00:09:21.214 --rc genhtml_function_coverage=1 00:09:21.214 --rc genhtml_legend=1 00:09:21.214 --rc geninfo_all_blocks=1 00:09:21.214 --rc geninfo_unexecuted_blocks=1 00:09:21.214 00:09:21.214 ' 00:09:21.214 19:58:54 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.214 --rc genhtml_branch_coverage=1 00:09:21.214 --rc genhtml_function_coverage=1 00:09:21.214 --rc genhtml_legend=1 00:09:21.214 --rc geninfo_all_blocks=1 00:09:21.214 --rc geninfo_unexecuted_blocks=1 00:09:21.214 00:09:21.214 ' 00:09:21.214 19:58:54 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.214 --rc genhtml_branch_coverage=1 00:09:21.214 --rc genhtml_function_coverage=1 00:09:21.214 --rc genhtml_legend=1 00:09:21.214 --rc geninfo_all_blocks=1 00:09:21.214 --rc geninfo_unexecuted_blocks=1 00:09:21.214 00:09:21.214 ' 00:09:21.214 19:58:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.214 19:58:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.214 19:58:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.214 19:58:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.214 19:58:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.214 19:58:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:21.214 19:58:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
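The lt 1.15 2 check traced just above is a field-wise numeric compare: cmp_versions splits both versions on '.', '-' and ':' and walks the fields left to right. A condensed sketch of the same idea, where version_lt is a hypothetical stand-in for the lt/cmp_versions pair (the real scripts/common.sh additionally normalizes each field through its decimal helper):

    # Field-by-field version compare in the spirit of cmp_versions above;
    # missing fields count as 0, and equal versions are not less-than.
    version_lt() {
        local IFS=.-:                  # separators shown at scripts/common.sh@336
        local -a ver1=($1) ver2=($2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo 'lcov 1.15 < 2'   # same verdict as lt 1.15 2 above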
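The long scan that follows populates one associative array per controller. The pattern repeated below (nvme/functions.sh@21-23) is IFS=: plus read -r reg val over nvme-cli's id-ctrl output. A simplified sketch of that loop, assuming the nvme-cli path shown in the trace; the key/value trimming here is my simplification, whereas the real nvme_get evals quoted assignments so multi-word values survive verbatim:

    # Parse 'reg : value' lines from id-ctrl into an associative array,
    # mirroring the IFS=: / read -r reg val loop repeated in the trace.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # 'ps    0' -> ps0, 'lbaf  0' -> lbaf0
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }           # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[oncs]}"              # 0x15d on these QEMU controllers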
00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:21.214 19:58:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:21.214 19:58:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.214 19:58:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.736 Waiting for block devices as requested 00:09:21.736 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.736 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.997 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.997 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.296 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.296 19:59:00 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:27.296 19:59:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.296 19:59:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:27.296 19:59:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.296 19:59:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.296 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:27.297 19:59:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:27.297 19:59:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:27.297 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.297 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:27.298 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:27.298 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:27.299 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.299 
19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:27.299 19:59:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.299 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.300 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:27.300 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:27.301 19:59:00 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:27.301 19:59:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:27.301 19:59:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:27.301 19:59:00 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
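
The @17-@23 xtrace lines that repeat through this section are nvme_get in nvme/functions.sh: for each controller that passes pci_can_use, it runs nvme-cli against the device, splits every "reg : val" line of the output on ':', and evals the pair into a global associative array named after the device (nvme0, nvme0n1, ...). A minimal sketch of that loop, reconstructed from the trace (the key trimming and skip conditions are assumptions, not copied from the script):

  # nvme_get <array> <nvme-cli invocation...> -- sketch reconstructed from the trace
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                   # e.g. declares global nvme1=()
      while IFS=: read -r reg val; do       # "vid : 0x1b36" -> reg="vid ", val=" 0x1b36"
          [[ -n $val ]] || continue         # skip header/blank lines with nothing after ':'
          reg=${reg//[[:space:]]/}          # assumed: strip padding around the key ("ps 0" -> ps0)
          val=${val# }                      # assumed: drop one leading space, keep trailing ones
          eval "${ref}[${reg}]=\"${val}\""  # produces the @23 assignments seen in this log
      done < <("$@")                        # e.g. /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
  }

Reading a value back afterwards is then just "${nvme1[sn]}".
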
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:27.301 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:27.302 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:27.303 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
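
With nvme1's id-ctrl dump captured, a few of the fields above decode as follows under the NVMe spec's field layouts (illustrative arithmetic on the logged values, not part of the test itself):

  # decode selected nvme1 fields from the trace above (values copied from the log)
  sqes=0x66 cqes=0x44 oncs=0x15d
  echo "SQ entry size: min $((2 ** (sqes & 0xf))), max $((2 ** (sqes >> 4))) bytes"  # 64 / 64
  echo "CQ entry size: min $((2 ** (cqes & 0xf))), max $((2 ** (cqes >> 4))) bytes"  # 16 / 16
  echo "ONCS: dsm=$(((oncs >> 2) & 1)) write_zeroes=$(((oncs >> 3) & 1))"            # both 1

mdts=7 likewise caps a single transfer at 2^7 minimum-page-size units, i.e. 512 KiB assuming a 4 KiB MPSMIN (MPSMIN itself is not shown in this trace).
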
0x17a17a ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:27.304 19:59:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:27.304 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:27.305 19:59:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.305 19:59:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:27.305 19:59:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.305 19:59:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:27.305 
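
[Editor's note] The trace above finishes the nvme1n1 id-ns table, files controller nvme1 into the bookkeeping maps at functions.sh@60-63 (ctrls, nvmes, bdfs, ordered_ctrls), and then moves the /sys/class/nvme walk on to nvme2 at PCI 0000:00:12.0, which passes the pci_can_use gate (the empty patterns at scripts/common.sh@21 and @25 suggest no PCI allow/deny filter is set in this run). A minimal sketch of that discovery-and-registration step, paraphrased from the trace rather than copied from the script; the sysfs "address" read is an assumption about how the BDF is resolved, and pci_can_use/nvme_get are the script's own helpers shown elsewhere in this log:

    declare -A ctrls nvmes bdfs
    ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                  # e.g. nvme2
        pci=$(< "$ctrl/address")              # assumed source of e.g. 0000:00:12.0
        pci_can_use "$pci" || continue        # the allow/deny check traced above
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns     # name of the per-controller ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index 2 -> nvme2
    done

The trace resumes mid-entry below.
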
19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.305 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:27.306 19:59:00 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:27.306 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.307 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.308 19:59:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.308 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:27.309 19:59:00 nvme_fdp -- 
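
[Editor's note] Every reg/val pair in this dump is produced by the same small helper: nvme_get runs nvme-cli, splits each output line at the first colon, and evals the pair into a globally scoped associative array. A hedged reconstruction of that loop follows; the shift, local -gA, IFS=:, read -r reg val, [[ -n ... ]] and eval steps all appear verbatim in the trace at functions.sh@16-23, while the exact whitespace trimming is an assumption inferred from the stored values:

    nvme_get() {                      # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"           # -g so the array outlives the function
        while IFS=: read -r reg val; do
            # nvme-cli prints lines like "vid : 0x1b36"; skip lines with no value,
            # matching the "[[ -n ... ]]" tests at functions.sh@22
            [[ -n $val ]] || continue
            eval "${ref}[${reg// /}]=\"${val## }\""   # trimming assumed, not traced
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Note that because IFS contains only ':', everything after the first colon lands in val, which is how multi-colon values such as 'ms:0 lbads:9 rp:0 ' survive intact.
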
nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.309 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- 
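
[Editor's note] With nvme2n1 parsed, the trace re-enters the per-namespace loop for nvme2n2. The enumeration pattern is visible directly in the lines above (functions.sh@53-58): a nameref aliases the controller's namespace map, each nvme2n* sysfs entry gets its own nvme_get id-ns pass, and the result is keyed by namespace number. Paraphrased, inside the enumeration function (local -n requires function scope):

    local -n _ctrl_ns=${ctrl_dev}_ns        # nameref: _ctrl_ns aliases nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do     # /sys/class/nvme/nvme2/nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # keyed by ns number: _ctrl_ns[1]=nvme2n1
    done

The trace continues below with the nvme2n2 id-ns fields.
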
nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.310 19:59:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:27.310 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:27.311 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.311 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
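The repeating IFS=: / read -r reg val / eval triplet above is the whole of the harvesting loop: each "reg : val" line emitted by nvme-cli is split at its first colon and stored into a globally scoped associative array named after the device. A minimal standalone sketch of that pattern follows; it is reconstructed from the trace, not the verbatim nvme/functions.sh source, and assumes the same "reg : val" output shape.

    # Sketch of the parsing pattern visible in this trace (assumption: not
    # the exact functions.sh body). Caller passes the array name, the
    # nvme-cli subcommand, and the device node.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # global assoc array, as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip banner and blank lines
            reg=${reg//[[:space:]]/}       # "ps      0 " -> "ps0"
            val=${val# }                   # trim the one space after ':'
            eval "${ref}[\$reg]=\$val"     # assignment context: no word splitting
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme2n2 id-ns /dev/nvme2n2
    echo "${nvme2n2[nsze]}"                # -> 0x100000

The stored fields are enough to recover the live geometry of each namespace above: per the NVMe spec the low bits of flbas index the LBA format list, so flbas=0x4 selects lbaf4 (the entry flagged "(in use)"), whose lbads:12 means 2^12 = 4096-byte logical blocks; nsze=0x100000 blocks therefore works out to a 4 GiB namespace.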
00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.312 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:27.313 
19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:27.313 19:59:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.313 19:59:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:27.313 19:59:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.313 19:59:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:27.313 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:27.313 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 
19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.314 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 
19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:27.315 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:27.316 19:59:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:27.316 19:59:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:27.316 19:59:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
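The xtrace above captures two helpers from nvme/functions.sh: the id-ctrl parser, which splits each `reg : val` line on the colon and evals it into a per-controller map (nvme3[sqes]=0x66 and so on), and ctrl_has_fdp, which reads each controller's CTRATT word and tests bit 19, the FDP-supported flag; the scan over the remaining controllers continues below. A minimal sketch of both pieces, assuming a /dev/nvme3 character device and nvme-cli's id-ctrl output format:

    # Sketch: build a register map from `nvme id-ctrl` output, then test
    # CTRATT bit 19 (FDP support). /dev/nvme3 and nvme-cli are assumptions.
    declare -A regs
    while IFS=': ' read -r reg val; do
        # first field up to the colon is the register name, rest is the value
        [[ -n $reg && -n $val ]] && regs[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)

    ctratt=${regs[ctratt]:-0}
    if (( ctratt & 1 << 19 )); then
        echo "nvme3 supports FDP (ctratt=$ctratt)"
    fi

Here 0x8000 fails the test (only bit 15 set), while nvme3's 0x88010 contains 0x80000 = 1 << 19, which is why nvme3 alone is selected for the FDP test.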
00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:27.317 19:59:01 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:27.317 19:59:01 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:27.317 19:59:01 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.459 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.459 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.459 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.459 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.459 19:59:02 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:28.459 19:59:02 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:28.459 19:59:02 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.459 19:59:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:28.459 ************************************ 00:09:28.459 START TEST nvme_flexible_data_placement 00:09:28.459 ************************************ 00:09:28.459 19:59:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:28.721 Initializing NVMe Controllers 00:09:28.721 Attaching to 0000:00:13.0 00:09:28.721 Controller supports FDP Attached to 0000:00:13.0 00:09:28.721 Namespace ID: 1 Endurance Group ID: 1 00:09:28.721 Initialization complete. 00:09:28.721 00:09:28.721 ================================== 00:09:28.721 == FDP tests for Namespace: #01 == 00:09:28.721 ================================== 00:09:28.721 00:09:28.721 Get Feature: FDP: 00:09:28.721 ================= 00:09:28.721 Enabled: Yes 00:09:28.721 FDP configuration Index: 0 00:09:28.721 00:09:28.721 FDP configurations log page 00:09:28.721 =========================== 00:09:28.721 Number of FDP configurations: 1 00:09:28.721 Version: 0 00:09:28.721 Size: 112 00:09:28.721 FDP Configuration Descriptor: 0 00:09:28.721 Descriptor Size: 96 00:09:28.721 Reclaim Group Identifier format: 2 00:09:28.721 FDP Volatile Write Cache: Not Present 00:09:28.721 FDP Configuration: Valid 00:09:28.721 Vendor Specific Size: 0 00:09:28.721 Number of Reclaim Groups: 2 00:09:28.721 Number of Reclaim Unit Handles: 8 00:09:28.721 Max Placement Identifiers: 128 00:09:28.721 Number of Namespaces Supported: 256 00:09:28.721 Reclaim Unit Nominal Size: 6000000 bytes 00:09:28.721 Estimated Reclaim Unit Time Limit: Not Reported 00:09:28.721 RUH Desc #000: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #001: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #002: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #003: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #004: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #005: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #006: RUH Type: Initially Isolated 00:09:28.721 RUH Desc #007: RUH Type: Initially Isolated 00:09:28.721 00:09:28.721 FDP reclaim unit handle usage log page 00:09:28.721 ====================================== 00:09:28.721 Number of Reclaim Unit Handles: 8 00:09:28.721 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:28.721 RUH Usage Desc #001: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #002: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #003: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #004: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #005: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #006: RUH Attributes: Unused 00:09:28.721 RUH Usage Desc #007: RUH Attributes: Unused 00:09:28.721 00:09:28.721 FDP statistics log page 00:09:28.721 ======================= 00:09:28.721 Host bytes with metadata written: 913719296 00:09:28.721 Media bytes with metadata written: 913960960 00:09:28.721 Media bytes erased: 0 00:09:28.721 00:09:28.721 FDP Reclaim unit handle status 00:09:28.721 ============================== 00:09:28.721 Number of RUHS descriptors: 2 00:09:28.721 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000589c 00:09:28.721 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:28.721 00:09:28.721 FDP write on placement id: 0 success 00:09:28.721 00:09:28.721 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:09:28.721 00:09:28.721 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:28.721 00:09:28.721 Get Feature: FDP Events for Placement handle: #0 00:09:28.721 ======================== 00:09:28.721 Number of FDP Events: 6 00:09:28.721 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:28.721 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:28.721 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:28.721 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:28.721 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:28.721 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:28.721 00:09:28.721 FDP events log page 00:09:28.721 =================== 00:09:28.721 Number of FDP events: 1 00:09:28.721 FDP Event #0: 00:09:28.721 Event Type: RU Not Written to Capacity 00:09:28.721 Placement Identifier: Valid 00:09:28.721 NSID: Valid 00:09:28.721 Location: Valid 00:09:28.721 Placement Identifier: 0 00:09:28.721 Event Timestamp: 6 00:09:28.721 Namespace Identifier: 1 00:09:28.721 Reclaim Group Identifier: 0 00:09:28.721 Reclaim Unit Handle Identifier: 0 00:09:28.721 00:09:28.721 FDP test passed 00:09:28.721 00:09:28.721 real 0m0.234s 00:09:28.721 user 0m0.075s 00:09:28.721 sys 0m0.059s 00:09:28.721 19:59:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.721 ************************************ 00:09:28.721 END TEST nvme_flexible_data_placement 00:09:28.721 ************************************ 00:09:28.721 19:59:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:28.721 00:09:28.721 real 0m7.602s 00:09:28.721 user 0m0.984s 00:09:28.721 sys 0m1.385s 00:09:28.721 19:59:02 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.721 ************************************ 00:09:28.721 END TEST nvme_fdp 00:09:28.721 ************************************ 00:09:28.721 19:59:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:28.721 19:59:02 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:28.721 19:59:02 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:28.721 19:59:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.721 19:59:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.721 19:59:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.721 ************************************ 00:09:28.721 START TEST nvme_rpc 00:09:28.721 ************************************ 00:09:28.721 19:59:02 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:28.981 * Looking for test storage... 
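Every test in this log runs under the run_test harness from autotest_common.sh, which produces the START TEST / END TEST banners and the real/user/sys timing lines seen above for nvme_flexible_data_placement and nvme_fdp. A rough sketch of that pattern (a hypothetical simplification, not the SPDK implementation, which also manages xtrace state):

    # Sketch of a run_test-style wrapper: banner, time the body, banner,
    # propagate the exit code.
    run_test_sketch() {
        local name=$1 banner='************************************' rc=0
        shift
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
        time "$@" || rc=$?   # `time` emits the real/user/sys lines
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
        return "$rc"
    }

    # e.g.: run_test_sketch nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh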
00:09:28.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.981 19:59:02 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.981 --rc genhtml_branch_coverage=1 00:09:28.981 --rc genhtml_function_coverage=1 00:09:28.981 --rc genhtml_legend=1 00:09:28.981 --rc geninfo_all_blocks=1 00:09:28.981 --rc geninfo_unexecuted_blocks=1 00:09:28.981 00:09:28.981 ' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.981 --rc genhtml_branch_coverage=1 00:09:28.981 --rc genhtml_function_coverage=1 00:09:28.981 --rc genhtml_legend=1 00:09:28.981 --rc geninfo_all_blocks=1 00:09:28.981 --rc geninfo_unexecuted_blocks=1 00:09:28.981 00:09:28.981 ' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.981 --rc genhtml_branch_coverage=1 00:09:28.981 --rc genhtml_function_coverage=1 00:09:28.981 --rc genhtml_legend=1 00:09:28.981 --rc geninfo_all_blocks=1 00:09:28.981 --rc geninfo_unexecuted_blocks=1 00:09:28.981 00:09:28.981 ' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.981 --rc genhtml_branch_coverage=1 00:09:28.981 --rc genhtml_function_coverage=1 00:09:28.981 --rc genhtml_legend=1 00:09:28.981 --rc geninfo_all_blocks=1 00:09:28.981 --rc geninfo_unexecuted_blocks=1 00:09:28.981 00:09:28.981 ' 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65678 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:28.981 19:59:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65678 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65678 ']' 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.981 19:59:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.981 [2024-11-19 19:59:02.755338] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
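get_first_nvme_bdf, traced at the top of this test, is a thin wrapper: it asks scripts/gen_nvme.sh for a JSON config, pulls every traddr out with jq, and takes the first entry, which on this runner is 0000:00:10.0. The core of that helper, sketched with the same commands the trace shows ($rootdir standing in for /home/vagrant/spdk_repo/spdk):

    # Sketch: first NVMe BDF from gen_nvme.sh's JSON output.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # -> 0000:00:10.0 here; 11.0, 12.0, 13.0 follow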
00:09:28.981 [2024-11-19 19:59:02.755451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65678 ] 00:09:29.239 [2024-11-19 19:59:02.913161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.239 [2024-11-19 19:59:03.010681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.239 [2024-11-19 19:59:03.010847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.823 19:59:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.823 19:59:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:29.823 19:59:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:30.080 Nvme0n1 00:09:30.080 19:59:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:30.080 19:59:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:30.339 request: 00:09:30.339 { 00:09:30.339 "bdev_name": "Nvme0n1", 00:09:30.339 "filename": "non_existing_file", 00:09:30.339 "method": "bdev_nvme_apply_firmware", 00:09:30.339 "req_id": 1 00:09:30.339 } 00:09:30.339 Got JSON-RPC error response 00:09:30.339 response: 00:09:30.339 { 00:09:30.339 "code": -32603, 00:09:30.339 "message": "open file failed." 00:09:30.339 } 00:09:30.339 19:59:04 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:30.339 19:59:04 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:30.339 19:59:04 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:30.596 19:59:04 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:30.596 19:59:04 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65678 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65678 ']' 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65678 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65678 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.596 killing process with pid 65678 00:09:30.596 19:59:04 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65678' 00:09:30.597 19:59:04 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65678 00:09:30.597 19:59:04 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65678 00:09:31.971 ************************************ 00:09:31.971 END TEST nvme_rpc 00:09:31.971 ************************************ 00:09:31.971 00:09:31.971 real 0m2.984s 00:09:31.971 user 0m5.681s 00:09:31.971 sys 0m0.485s 00:09:31.971 19:59:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.971 19:59:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.971 19:59:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.971 19:59:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:31.971 19:59:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.971 19:59:05 -- common/autotest_common.sh@10 -- # set +x 00:09:31.971 ************************************ 00:09:31.971 START TEST nvme_rpc_timeouts 00:09:31.971 ************************************ 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.971 * Looking for test storage... 00:09:31.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.971 19:59:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.971 --rc genhtml_branch_coverage=1 00:09:31.971 --rc genhtml_function_coverage=1 00:09:31.971 --rc genhtml_legend=1 00:09:31.971 --rc geninfo_all_blocks=1 00:09:31.971 --rc geninfo_unexecuted_blocks=1 00:09:31.971 00:09:31.971 ' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.971 --rc genhtml_branch_coverage=1 00:09:31.971 --rc genhtml_function_coverage=1 00:09:31.971 --rc genhtml_legend=1 00:09:31.971 --rc geninfo_all_blocks=1 00:09:31.971 --rc geninfo_unexecuted_blocks=1 00:09:31.971 00:09:31.971 ' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.971 --rc genhtml_branch_coverage=1 00:09:31.971 --rc genhtml_function_coverage=1 00:09:31.971 --rc genhtml_legend=1 00:09:31.971 --rc geninfo_all_blocks=1 00:09:31.971 --rc geninfo_unexecuted_blocks=1 00:09:31.971 00:09:31.971 ' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.971 --rc genhtml_branch_coverage=1 00:09:31.971 --rc genhtml_function_coverage=1 00:09:31.971 --rc genhtml_legend=1 00:09:31.971 --rc geninfo_all_blocks=1 00:09:31.971 --rc geninfo_unexecuted_blocks=1 00:09:31.971 00:09:31.971 ' 00:09:31.971 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.971 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65738 00:09:31.971 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65738 00:09:31.971 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65770 00:09:31.972 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
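Each test script gates its lcov coverage options through the version helpers in scripts/common.sh, traced again above: `lt 1.15 2` splits both version strings on dots, compares them field by field as integers, and treats a missing component as zero. The same logic in a standalone sketch:

    # Sketch of the cmp_versions pattern: is version $1 strictly less than $2?
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"

Because 1 < 2 is decided at the first component, `lt 1.15 2` returns 0 and the branch-coverage LCOV_OPTS seen above get exported.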
00:09:31.972 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65770 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65770 ']' 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.972 19:59:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.972 19:59:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 [2024-11-19 19:59:05.707171] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:31.972 [2024-11-19 19:59:05.707286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65770 ] 00:09:32.230 [2024-11-19 19:59:05.858970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.230 [2024-11-19 19:59:05.958939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.230 [2024-11-19 19:59:05.959018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.798 Checking default timeout settings: 00:09:32.798 19:59:06 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.798 19:59:06 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:32.798 19:59:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:32.798 19:59:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:33.364 Making settings changes with rpc: 00:09:33.364 19:59:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:33.364 19:59:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:33.364 Check default vs. modified settings: 00:09:33.364 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:33.364 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65738 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65738 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:33.622 Setting action_on_timeout is changed as expected. 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65738 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65738 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:33.622 Setting timeout_us is changed as expected. 00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:33.622 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65738 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65738 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:33.880 Setting timeout_admin_us is changed as expected. 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65738 /tmp/settings_modified_65738 00:09:33.880 19:59:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65770 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65770 ']' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65770 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65770 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.880 killing process with pid 65770 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65770' 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65770 00:09:33.880 19:59:07 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65770 00:09:35.258 RPC TIMEOUT SETTING TEST PASSED. 00:09:35.258 19:59:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
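The body of nvme_rpc_timeouts boils down to a snapshot-and-diff: save_config before and after bdev_nvme_set_options, then for each key grep both files and squeeze the value down with awk and sed before comparing, which is exactly the grep/awk/sed triplets traced above. A condensed sketch of that flow (tmp file names shortened; the real script derives them from its pid):

    # Sketch: change NVMe timeouts over RPC and verify via config snapshots.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified

    for key in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$key" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$key" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "Setting $key was not changed"; exit 1; }
        echo "Setting $key is changed as expected."
    done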
00:09:35.258 00:09:35.258 real 0m3.184s 00:09:35.258 user 0m6.134s 00:09:35.258 sys 0m0.540s 00:09:35.258 19:59:08 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.258 ************************************ 00:09:35.258 END TEST nvme_rpc_timeouts 00:09:35.258 ************************************ 00:09:35.258 19:59:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:35.258 19:59:08 -- spdk/autotest.sh@239 -- # uname -s 00:09:35.258 19:59:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:35.258 19:59:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:35.258 19:59:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.258 19:59:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.258 19:59:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.258 ************************************ 00:09:35.258 START TEST sw_hotplug 00:09:35.258 ************************************ 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:35.258 * Looking for test storage... 00:09:35.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.258 19:59:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.258 --rc genhtml_branch_coverage=1 00:09:35.258 --rc genhtml_function_coverage=1 00:09:35.258 --rc genhtml_legend=1 00:09:35.258 --rc geninfo_all_blocks=1 00:09:35.258 --rc geninfo_unexecuted_blocks=1 00:09:35.258 00:09:35.258 ' 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.258 --rc genhtml_branch_coverage=1 00:09:35.258 --rc genhtml_function_coverage=1 00:09:35.258 --rc genhtml_legend=1 00:09:35.258 --rc geninfo_all_blocks=1 00:09:35.258 --rc geninfo_unexecuted_blocks=1 00:09:35.258 00:09:35.258 ' 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.258 --rc genhtml_branch_coverage=1 00:09:35.258 --rc genhtml_function_coverage=1 00:09:35.258 --rc genhtml_legend=1 00:09:35.258 --rc geninfo_all_blocks=1 00:09:35.258 --rc geninfo_unexecuted_blocks=1 00:09:35.258 00:09:35.258 ' 00:09:35.258 19:59:08 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.259 --rc genhtml_branch_coverage=1 00:09:35.259 --rc genhtml_function_coverage=1 00:09:35.259 --rc genhtml_legend=1 00:09:35.259 --rc geninfo_all_blocks=1 00:09:35.259 --rc geninfo_unexecuted_blocks=1 00:09:35.259 00:09:35.259 ' 00:09:35.259 19:59:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.779 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.779 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.779 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.779 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.779 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:35.779 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:35.779 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:35.779 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.779 19:59:09 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.779 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:35.780 19:59:09 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.780 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:35.780 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:35.780 19:59:09 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.302 Waiting for block devices as requested 00:09:36.302 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.302 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.302 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.564 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.890 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.890 19:59:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:41.890 19:59:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.890 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:42.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.148 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:42.408 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:42.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66627 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:42.670 19:59:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:42.670 19:59:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:42.931 Initializing NVMe Controllers 00:09:42.931 Attaching to 0000:00:10.0 00:09:42.931 Attaching to 0000:00:11.0 00:09:42.931 Attached to 0000:00:11.0 00:09:42.931 Attached to 0000:00:10.0 00:09:42.931 Initialization complete. Starting I/O... 
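The enumeration traced above is how nvme_in_userspace discovers the test controllers: it converts the class/subclass/progif triple 01/08/02 to hex, filters lspci -mm -n -D output on the programming-interface suffix, and prints one BDF per line. A sketch of iter_all_pci_class_code reconstructed from those traced lines (scripts/common.sh@233-245), with the argument plumbing simplified:

    iter_all_pci_class_code() {
        local class subclass progif
        class=$(printf '%02x' $((0x$1)))      # 01: mass-storage controller
        subclass=$(printf '%02x' $((0x$2)))   # 08: non-volatile memory
        progif=$(printf '%02x' $((0x$3)))     # 02: NVM Express interface
        hash lspci                            # bail out early if lspci is missing
        if [[ $progif != 00 ]]; then
            # lspci -mm -n -D rows look like: 0000:00:10.0 "0108" "1b36" "0010" -p02 ...
            lspci -mm -n -D | grep -i -- "-p${progif}" |
                awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
                tr -d '"'
        fi
        # the progif == 00 branch is not exercised in this run and is omitted
    }

    # iter_all_pci_class_code 01 08 02  ->  0000:00:10.0 .. 0000:00:13.0, as above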
00:09:42.931 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:42.931 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:42.931 00:09:43.874 QEMU NVMe Ctrl (12341 ): 2016 I/Os completed (+2016) 00:09:43.874 QEMU NVMe Ctrl (12340 ): 2012 I/Os completed (+2012) 00:09:43.874 00:09:44.821 QEMU NVMe Ctrl (12341 ): 4524 I/Os completed (+2508) 00:09:44.821 QEMU NVMe Ctrl (12340 ): 4538 I/Os completed (+2526) 00:09:44.821 00:09:46.209 QEMU NVMe Ctrl (12341 ): 7448 I/Os completed (+2924) 00:09:46.209 QEMU NVMe Ctrl (12340 ): 7463 I/Os completed (+2925) 00:09:46.209 00:09:47.154 QEMU NVMe Ctrl (12341 ): 10672 I/Os completed (+3224) 00:09:47.154 QEMU NVMe Ctrl (12340 ): 10687 I/Os completed (+3224) 00:09:47.154 00:09:48.090 QEMU NVMe Ctrl (12341 ): 13935 I/Os completed (+3263) 00:09:48.090 QEMU NVMe Ctrl (12340 ): 13846 I/Os completed (+3159) 00:09:48.090 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.656 [2024-11-19 19:59:22.373969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:48.656 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:48.656 [2024-11-19 19:59:22.375432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.375487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.375507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.375526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:48.656 [2024-11-19 19:59:22.377577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.377694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.377730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.377807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:48.656 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.656 [2024-11-19 19:59:22.404661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:48.656 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:48.656 [2024-11-19 19:59:22.406323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.406365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.406387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.406403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:48.656 [2024-11-19 19:59:22.408083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.408112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.408126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.656 [2024-11-19 19:59:22.408140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.914 Attaching to 0000:00:10.0 00:09:48.914 Attached to 0000:00:10.0 00:09:48.914 QEMU NVMe Ctrl (12340 ): 20 I/Os completed (+20) 00:09:48.914 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.914 19:59:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.914 Attaching to 0000:00:11.0 00:09:48.914 Attached to 0000:00:11.0 00:09:49.847 QEMU NVMe Ctrl (12340 ): 4199 I/Os completed (+4179) 00:09:49.847 QEMU NVMe Ctrl (12341 ): 4558 I/Os completed (+4558) 00:09:49.847 00:09:51.225 QEMU NVMe Ctrl (12340 ): 7628 I/Os completed (+3429) 00:09:51.225 QEMU NVMe Ctrl (12341 ): 8418 I/Os completed (+3860) 00:09:51.225 00:09:51.797 QEMU NVMe Ctrl (12340 ): 11382 I/Os completed (+3754) 00:09:51.797 QEMU NVMe Ctrl (12341 ): 12069 I/Os completed (+3651) 00:09:51.797 00:09:53.180 QEMU NVMe Ctrl (12340 ): 15187 I/Os completed (+3805) 00:09:53.180 QEMU NVMe Ctrl (12341 ): 15851 I/Os completed (+3782) 00:09:53.180 00:09:54.123 QEMU NVMe Ctrl (12340 ): 18565 I/Os completed (+3378) 00:09:54.123 QEMU NVMe Ctrl (12341 ): 19179 I/Os completed (+3328) 00:09:54.123 00:09:55.068 QEMU NVMe Ctrl (12340 ): 21185 I/Os completed (+2620) 00:09:55.068 QEMU NVMe Ctrl (12341 ): 21800 I/Os completed (+2621) 00:09:55.068 00:09:56.021 QEMU NVMe Ctrl (12340 ): 23741 I/Os completed (+2556) 00:09:56.021 QEMU NVMe Ctrl (12341 ): 24366 I/Os completed (+2566) 00:09:56.021 00:09:56.961 QEMU NVMe Ctrl (12340 ): 26573 I/Os completed (+2832) 00:09:56.961 QEMU NVMe Ctrl (12341 ): 27198 I/Os completed (+2832) 00:09:56.961 00:09:57.905 QEMU NVMe Ctrl (12340 ): 29291 I/Os completed (+2718) 
00:09:57.905 QEMU NVMe Ctrl (12341 ): 29913 I/Os completed (+2715) 00:09:57.905 00:09:58.856 QEMU NVMe Ctrl (12340 ): 31979 I/Os completed (+2688) 00:09:58.857 QEMU NVMe Ctrl (12341 ): 32638 I/Os completed (+2725) 00:09:58.857 00:09:59.792 QEMU NVMe Ctrl (12340 ): 35283 I/Os completed (+3304) 00:09:59.792 QEMU NVMe Ctrl (12341 ): 35722 I/Os completed (+3084) 00:09:59.792 00:10:01.167 QEMU NVMe Ctrl (12340 ): 38697 I/Os completed (+3414) 00:10:01.167 QEMU NVMe Ctrl (12341 ): 38798 I/Os completed (+3076) 00:10:01.167 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.167 [2024-11-19 19:59:34.637677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:01.167 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:01.167 [2024-11-19 19:59:34.638953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.639089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.639135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.639252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:01.167 [2024-11-19 19:59:34.641149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.641199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.641213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.641239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.167 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.167 [2024-11-19 19:59:34.658578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:01.167 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:01.167 [2024-11-19 19:59:34.659620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.167 [2024-11-19 19:59:34.659659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 [2024-11-19 19:59:34.659678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 [2024-11-19 19:59:34.659693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:01.168 [2024-11-19 19:59:34.661348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 [2024-11-19 19:59:34.661376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 [2024-11-19 19:59:34.661391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 [2024-11-19 19:59:34.661405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.168 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:01.168 EAL: Scan for (pci) bus failed. 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:01.168 Attaching to 0000:00:10.0 00:10:01.168 Attached to 0000:00:10.0 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.168 19:59:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:01.168 Attaching to 0000:00:11.0 00:10:01.168 Attached to 0000:00:11.0 00:10:02.103 QEMU NVMe Ctrl (12340 ): 2892 I/Os completed (+2892) 00:10:02.103 QEMU NVMe Ctrl (12341 ): 2602 I/Os completed (+2602) 00:10:02.103 00:10:03.037 QEMU NVMe Ctrl (12340 ): 6649 I/Os completed (+3757) 00:10:03.037 QEMU NVMe Ctrl (12341 ): 6248 I/Os completed (+3646) 00:10:03.037 00:10:03.970 QEMU NVMe Ctrl (12340 ): 9840 I/Os completed (+3191) 00:10:03.970 QEMU NVMe Ctrl (12341 ): 9325 I/Os completed (+3077) 00:10:03.970 00:10:04.903 QEMU NVMe Ctrl (12340 ): 13154 I/Os completed (+3314) 00:10:04.903 QEMU NVMe Ctrl (12341 ): 12451 I/Os completed (+3126) 00:10:04.903 00:10:05.835 QEMU NVMe Ctrl (12340 ): 16926 I/Os completed (+3772) 00:10:05.835 QEMU NVMe Ctrl (12341 ): 16209 I/Os completed (+3758) 00:10:05.835 00:10:07.206 QEMU NVMe Ctrl (12340 ): 20687 I/Os completed (+3761) 00:10:07.206 QEMU NVMe Ctrl (12341 ): 19937 I/Os completed (+3728) 00:10:07.206 00:10:08.139 QEMU NVMe Ctrl (12340 ): 24461 I/Os completed (+3774) 00:10:08.139 QEMU NVMe Ctrl (12341 ): 23693 I/Os completed (+3756) 00:10:08.139 
00:10:09.115 QEMU NVMe Ctrl (12340 ): 28198 I/Os completed (+3737) 00:10:09.115 QEMU NVMe Ctrl (12341 ): 27422 I/Os completed (+3729) 00:10:09.115 00:10:10.081 QEMU NVMe Ctrl (12340 ): 31938 I/Os completed (+3740) 00:10:10.081 QEMU NVMe Ctrl (12341 ): 31127 I/Os completed (+3705) 00:10:10.081 00:10:11.015 QEMU NVMe Ctrl (12340 ): 35712 I/Os completed (+3774) 00:10:11.015 QEMU NVMe Ctrl (12341 ): 34896 I/Os completed (+3769) 00:10:11.015 00:10:11.948 QEMU NVMe Ctrl (12340 ): 39494 I/Os completed (+3782) 00:10:11.948 QEMU NVMe Ctrl (12341 ): 38656 I/Os completed (+3760) 00:10:11.948 00:10:12.882 QEMU NVMe Ctrl (12340 ): 43276 I/Os completed (+3782) 00:10:12.882 QEMU NVMe Ctrl (12341 ): 42430 I/Os completed (+3774) 00:10:12.882 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.141 [2024-11-19 19:59:46.881159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:13.141 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.141 [2024-11-19 19:59:46.882279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.882388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.882420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.882481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.141 [2024-11-19 19:59:46.884096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.884188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.884203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.884214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.141 [2024-11-19 19:59:46.904161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
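Each cycle in this pass follows the same traced shape: sw_hotplug.sh@40 echoes 1 once per controller to surprise-remove it (the app then logs the aborted outstanding commands), and after the settle window @56-62 rescan the bus and rebind the devices to uio_pci_generic. xtrace does not show the redirection targets, so the sysfs paths in this sketch are the standard Linux ones, an assumption rather than a verbatim copy of the script; only the echoed payloads come from the log:

    # Hedged reconstruction of one hotplug event (sw_hotplug.sh@38-62).
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40 (path assumed)
    done
    sleep "$hotplug_wait"                             # let the driver abort outstanding I/O
    echo 1 > /sys/bus/pci/rescan                      # @56; the trap at @112 uses this path verbatim
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (path assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe      # @60-61 echo the BDF twice; exact targets unknown
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done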
00:10:13.141 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.141 [2024-11-19 19:59:46.905081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.905111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.905125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.905138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.141 [2024-11-19 19:59:46.906540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.906622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.906650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 [2024-11-19 19:59:46.906702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.141 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:13.141 EAL: Scan for (pci) bus failed. 00:10:13.141 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.402 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.402 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.402 19:59:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.402 Attaching to 0000:00:10.0 00:10:13.402 Attached to 0000:00:10.0 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.402 19:59:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.402 Attaching to 0000:00:11.0 00:10:13.402 Attached to 0000:00:11.0 00:10:13.402 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.402 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.402 [2024-11-19 19:59:47.133173] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:25.642 19:59:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.642 19:59:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.643 19:59:59 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.75 00:10:25.643 19:59:59 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.75 00:10:25.643 19:59:59 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:25.643 19:59:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.75 00:10:25.643 19:59:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.75 2 00:10:25.643 remove_attach_helper took 42.75s to complete (handling 2 nvme drive(s)) 19:59:59 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66627 00:10:32.228 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66627) - No such process 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66627 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67169 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67169 00:10:32.228 20:00:05 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67169 ']' 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.228 20:00:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.228 [2024-11-19 20:00:05.210452] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
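The 42.75 s reported above for the first pass is measured by bash itself: timing_cmd (autotest_common.sh@709-722 in the trace) runs the helper under the time keyword with TIMEFORMAT=%2R so that only wall-clock seconds are emitted. A minimal sketch of that pattern, with the real helper's exec/fd plumbing simplified away:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # `time` prints its %2R report on stderr; the command substitution
        # captures it, while the helper's own output is discarded here.
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"            # e.g. 42.75, as in the log above
        return "$cmd_es"
    }

    # helper_time=$(timing_cmd remove_attach_helper 3 6 false)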
00:10:32.228 [2024-11-19 20:00:05.210567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:10:32.228 [2024-11-19 20:00:05.368512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.228 [2024-11-19 20:00:05.464035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:32.489 20:00:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:32.489 20:00:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.080 20:00:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.080 20:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.080 20:00:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:39.080 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:39.080 [2024-11-19 20:00:12.145587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:39.080 [2024-11-19 20:00:12.146772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.080 [2024-11-19 20:00:12.146808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.080 [2024-11-19 20:00:12.146821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.080 [2024-11-19 20:00:12.146839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.080 [2024-11-19 20:00:12.146846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.080 [2024-11-19 20:00:12.146854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.080 [2024-11-19 20:00:12.146862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.146869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.146876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 [2024-11-19 20:00:12.146887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.146893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.146901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.081 20:00:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.081 20:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.081 [2024-11-19 20:00:12.645584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:39.081 [2024-11-19 20:00:12.646773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.646804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.646815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 [2024-11-19 20:00:12.646830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.646838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.646846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 [2024-11-19 20:00:12.646854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.646860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.646868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 [2024-11-19 20:00:12.646875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.081 [2024-11-19 20:00:12.646882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.081 [2024-11-19 20:00:12.646889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.081 20:00:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:39.081 20:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.646 20:00:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.646 20:00:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.646 20:00:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.646 20:00:13 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.646 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.904 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.904 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.904 20:00:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.102 [2024-11-19 20:00:25.545816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
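In this target-mode pass (use_bdev=true) removal is detected over RPC rather than sysfs: the script polls bdev_get_bdevs until the removed controllers' PCI addresses disappear from the bdev list. The helper and wait loop traced at sw_hotplug.sh@12-13 and @50-51 reduce to roughly the following; the real helper feeds rpc_cmd output to jq via process substitution (/dev/fd/63), for which a plain pipe is equivalent:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done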
00:10:52.102 [2024-11-19 20:00:25.546963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-19 20:00:25.546997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-19 20:00:25.547008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-19 20:00:25.547024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-19 20:00:25.547031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-19 20:00:25.547040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-19 20:00:25.547047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-19 20:00:25.547054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-19 20:00:25.547061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-19 20:00:25.547069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-19 20:00:25.547075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-19 20:00:25.547083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 20:00:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:52.102 20:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:52.361 [2024-11-19 20:00:25.945814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:52.361 [2024-11-19 20:00:25.947038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.361 [2024-11-19 20:00:25.947068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.361 [2024-11-19 20:00:25.947081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.361 [2024-11-19 20:00:25.947093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.361 [2024-11-19 20:00:25.947101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.361 [2024-11-19 20:00:25.947108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.361 [2024-11-19 20:00:25.947117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.361 [2024-11-19 20:00:25.947123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.361 [2024-11-19 20:00:25.947131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.361 [2024-11-19 20:00:25.947138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.361 [2024-11-19 20:00:25.947145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.361 [2024-11-19 20:00:25.947152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.361 20:00:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.361 20:00:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.361 20:00:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.361 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.619 20:00:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.820 20:00:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.820 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.820 [2024-11-19 20:00:38.446051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:04.820 [2024-11-19 20:00:38.447197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.820 [2024-11-19 20:00:38.447240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.820 [2024-11-19 20:00:38.447251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.820 [2024-11-19 20:00:38.447266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.820 [2024-11-19 20:00:38.447273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.820 [2024-11-19 20:00:38.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.820 [2024-11-19 20:00:38.447289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.820 [2024-11-19 20:00:38.447297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.820 [2024-11-19 20:00:38.447303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.820 [2024-11-19 20:00:38.447311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.820 [2024-11-19 20:00:38.447318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.820 [2024-11-19 20:00:38.447325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.387 20:00:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.387 20:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.387 [2024-11-19 20:00:38.946054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:05.387 [2024-11-19 20:00:38.947191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.387 [2024-11-19 20:00:38.947234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.387 [2024-11-19 20:00:38.947246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.387 [2024-11-19 20:00:38.947259] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.387 [2024-11-19 20:00:38.947267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.387 [2024-11-19 20:00:38.947274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.387 [2024-11-19 20:00:38.947282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.387 [2024-11-19 20:00:38.947289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.387 [2024-11-19 20:00:38.947298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.387 [2024-11-19 20:00:38.947305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.387 [2024-11-19 20:00:38.947312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.387 [2024-11-19 20:00:38.947319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.387 20:00:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:05.387 20:00:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.387 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:05.649 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.649 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.649 20:00:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.913 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.913 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.913 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.913 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.913 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:11:17.914 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:17.914 20:00:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:17.914 20:00:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.472 20:00:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.472 20:00:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 20:00:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.472 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.472 [2024-11-19 20:00:57.351976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:24.472 [2024-11-19 20:00:57.353076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.353111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.353122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.353137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.353145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.353153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.353160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.353169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.353184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.353190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.353200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.751973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
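This second target pass first turns the hotplug monitor off and back on over RPC (sw_hotplug.sh@119-120; @115 did the initial enable when spdk_tgt came up). rpc_cmd is the harness's wrapper around the RPC client, so assuming the default /var/tmp/spdk.sock socket the target was waited on above, the equivalent direct calls are:

    # The -e/-d flags are verbatim from the traced rpc_cmd lines; the explicit
    # socket path is an assumption (it is the default the harness listens on).
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -d   # stop the hotplug poller
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e   # re-enable it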
00:11:24.472 [2024-11-19 20:00:57.753096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.753128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.753139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.753150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.753158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.753165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.753173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.472 [2024-11-19 20:00:57.753180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.472 [2024-11-19 20:00:57.753188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.472 [2024-11-19 20:00:57.753196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.473 [2024-11-19 20:00:57.753203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.473 [2024-11-19 20:00:57.753210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.473 20:00:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.473 20:00:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.473 20:00:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.473 20:00:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.473 20:00:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.664 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.664 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.665 [2024-11-19 20:01:10.152206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:36.665 [2024-11-19 20:01:10.153608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.665 [2024-11-19 20:01:10.153703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.665 [2024-11-19 20:01:10.153738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.665 [2024-11-19 20:01:10.153772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.665 [2024-11-19 20:01:10.153789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.665 [2024-11-19 20:01:10.153852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.665 [2024-11-19 20:01:10.153879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.665 [2024-11-19 20:01:10.153898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.665 [2024-11-19 20:01:10.153921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.665 [2024-11-19 20:01:10.153993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.665 [2024-11-19 20:01:10.154032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.665 [2024-11-19 20:01:10.154059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.665 20:01:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.665 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.923 [2024-11-19 20:01:10.652210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:36.923 [2024-11-19 20:01:10.653357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.923 [2024-11-19 20:01:10.653446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.923 [2024-11-19 20:01:10.653513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.923 [2024-11-19 20:01:10.653570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.923 [2024-11-19 20:01:10.653591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.923 [2024-11-19 20:01:10.653638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.923 [2024-11-19 20:01:10.653665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.923 [2024-11-19 20:01:10.653705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.923 [2024-11-19 20:01:10.653734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.923 [2024-11-19 20:01:10.653782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.923 [2024-11-19 20:01:10.653802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.923 [2024-11-19 20:01:10.653825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.923 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:36.923 20:01:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.923 20:01:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.923 20:01:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.181 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.182 20:01:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.403 20:01:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.403 20:01:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.403 20:01:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.403 20:01:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.403 20:01:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.403 20:01:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.403 20:01:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.403 [2024-11-19 20:01:23.052433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
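This stretch covers both sysfs phases of one hotplug iteration: sh@56-62 re-attach the two devices after the previous removal, and sh@39-40 kick off the next removal, whose abort dump resumes below. xtrace records only the values being echoed, never the redirection targets, so the sketch below fills those in with the standard Linux PCI sysfs paths; the paths are an assumption consistent with the echoed values, not something the log itself proves:

    # sh@39-40: hot-remove each device (assumed target: the per-device "remove" attribute)
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

    # sh@56: bring the devices back (assumed target: the bus-wide rescan trigger)
    echo 1 > /sys/bus/pci/rescan

    # sh@58-62: pin each rediscovered device to uio_pci_generic, then clear the override;
    # the BDF is echoed twice at sh@60-61, so two per-device targets are involved there.
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done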
00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.403 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.403 [2024-11-19 20:01:23.053393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.403 [2024-11-19 20:01:23.053419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.403 [2024-11-19 20:01:23.053429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.403 [2024-11-19 20:01:23.053445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.403 [2024-11-19 20:01:23.053452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.403 [2024-11-19 20:01:23.053460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.403 [2024-11-19 20:01:23.053468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.403 [2024-11-19 20:01:23.053477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.403 [2024-11-19 20:01:23.053484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.403 [2024-11-19 20:01:23.053492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.403 [2024-11-19 20:01:23.053498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.403 [2024-11-19 20:01:23.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.661 [2024-11-19 20:01:23.452432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
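The *ERROR*/*NOTICE* blocks printed at every removal (the one for 0000:00:11.0 resumes below) are expected behavior, not failures: when a controller drops off the bus, nvme_pcie_qpair_abort_trackers aborts whatever is still outstanding on the admin queue pair, which in this test is always the four ASYNC EVENT REQUEST commands (opcode 0c, cid 187-190). spdk_nvme_print_completion then reports each with status (00/07), i.e. status code type 0 (generic) and status code 0x07, Command Abort Requested. A quick consistency check when scanning such a log is that the dump volume scales with controllers times outstanding AERs times hotplug iterations, for example:

    # 2 controllers x 4 outstanding AERs per hot-remove; the log file name is illustrative
    grep -c 'ABORTED - BY REQUEST' console.log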
00:11:49.920 [2024-11-19 20:01:23.453288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.920 [2024-11-19 20:01:23.453315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.920 [2024-11-19 20:01:23.453326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.920 [2024-11-19 20:01:23.453337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.920 [2024-11-19 20:01:23.453345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.920 [2024-11-19 20:01:23.453352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.920 [2024-11-19 20:01:23.453361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.920 [2024-11-19 20:01:23.453368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.920 [2024-11-19 20:01:23.453376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.920 [2024-11-19 20:01:23.453383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.920 [2024-11-19 20:01:23.453392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.920 [2024-11-19 20:01:23.453399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.920 20:01:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.920 20:01:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.920 20:01:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.920 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.179 20:01:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.57 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.57 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.57 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.57 2 00:12:02.389 remove_attach_helper took 44.57s to complete (handling 2 nvme drive(s)) 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:02.389 20:01:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67169 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67169 ']' 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67169 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67169 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67169' 00:12:02.389 killing process with pid 67169 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67169 00:12:02.389 20:01:35 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67169 00:12:03.325 20:01:37 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.160 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.160 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.160 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.160 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.423 00:12:04.423 real 2m29.251s 00:12:04.423 user 1m51.041s 00:12:04.423 sys 0m16.614s 00:12:04.423 
************************************ 00:12:04.423 END TEST sw_hotplug 00:12:04.423 20:01:37 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.423 20:01:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.423 ************************************ 00:12:04.423 20:01:38 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:04.423 20:01:38 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.423 20:01:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.423 20:01:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.423 20:01:38 -- common/autotest_common.sh@10 -- # set +x 00:12:04.423 ************************************ 00:12:04.423 START TEST nvme_xnvme 00:12:04.423 ************************************ 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.423 * Looking for test storage... 00:12:04.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.423 --rc genhtml_branch_coverage=1 00:12:04.423 --rc genhtml_function_coverage=1 00:12:04.423 --rc genhtml_legend=1 00:12:04.423 --rc geninfo_all_blocks=1 00:12:04.423 --rc geninfo_unexecuted_blocks=1 00:12:04.423 00:12:04.423 ' 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.423 --rc genhtml_branch_coverage=1 00:12:04.423 --rc genhtml_function_coverage=1 00:12:04.423 --rc genhtml_legend=1 00:12:04.423 --rc geninfo_all_blocks=1 00:12:04.423 --rc geninfo_unexecuted_blocks=1 00:12:04.423 00:12:04.423 ' 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.423 --rc genhtml_branch_coverage=1 00:12:04.423 --rc genhtml_function_coverage=1 00:12:04.423 --rc genhtml_legend=1 00:12:04.423 --rc geninfo_all_blocks=1 00:12:04.423 --rc geninfo_unexecuted_blocks=1 00:12:04.423 00:12:04.423 ' 00:12:04.423 20:01:38 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.423 --rc genhtml_branch_coverage=1 00:12:04.423 --rc genhtml_function_coverage=1 00:12:04.423 --rc genhtml_legend=1 00:12:04.423 --rc geninfo_all_blocks=1 00:12:04.423 --rc geninfo_unexecuted_blocks=1 00:12:04.423 00:12:04.423 ' 00:12:04.423 20:01:38 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.423 20:01:38 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.685 20:01:38 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.685 20:01:38 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.685 20:01:38 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.685 20:01:38 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 20:01:38 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 20:01:38 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 20:01:38 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:04.685 20:01:38 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 20:01:38 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:04.685 20:01:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.685 20:01:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.685 20:01:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.685 ************************************ 00:12:04.685 START TEST xnvme_to_malloc_dd_copy 00:12:04.685 ************************************ 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:04.685 20:01:38 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:04.685 20:01:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:04.685 { 00:12:04.685 "subsystems": [ 00:12:04.685 { 00:12:04.685 "subsystem": "bdev", 00:12:04.685 "config": [ 00:12:04.685 { 00:12:04.685 "params": { 00:12:04.685 "block_size": 512, 00:12:04.685 "num_blocks": 2097152, 00:12:04.685 "name": "malloc0" 00:12:04.685 }, 00:12:04.685 "method": "bdev_malloc_create" 00:12:04.685 }, 00:12:04.685 { 00:12:04.685 "params": { 00:12:04.685 "io_mechanism": "libaio", 00:12:04.685 "filename": "/dev/nullb0", 00:12:04.685 "name": "null0" 00:12:04.685 }, 00:12:04.685 "method": "bdev_xnvme_create" 00:12:04.685 }, 00:12:04.685 { 00:12:04.685 "method": "bdev_wait_for_examine" 00:12:04.685 } 00:12:04.685 ] 00:12:04.685 } 00:12:04.685 ] 00:12:04.685 } 00:12:04.685 [2024-11-19 20:01:38.332334] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:12:04.685 [2024-11-19 20:01:38.332470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68550 ] 00:12:04.946 [2024-11-19 20:01:38.492930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.946 [2024-11-19 20:01:38.609416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.494  [2024-11-19T20:01:41.860Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-19T20:01:42.798Z] Copying: 453/1024 [MB] (226 MBps) [2024-11-19T20:01:43.733Z] Copying: 702/1024 [MB] (248 MBps) [2024-11-19T20:01:43.992Z] Copying: 1005/1024 [MB] (303 MBps) [2024-11-19T20:01:45.898Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:12:12.104 00:12:12.104 20:01:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:12.104 20:01:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:12.104 20:01:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.104 20:01:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.104 { 00:12:12.104 "subsystems": [ 00:12:12.104 { 00:12:12.104 "subsystem": "bdev", 00:12:12.104 "config": [ 00:12:12.104 { 00:12:12.104 "params": { 00:12:12.104 "block_size": 512, 00:12:12.104 "num_blocks": 2097152, 00:12:12.104 "name": "malloc0" 00:12:12.104 }, 00:12:12.104 "method": "bdev_malloc_create" 00:12:12.104 }, 00:12:12.104 { 00:12:12.104 "params": { 00:12:12.104 "io_mechanism": "libaio", 00:12:12.104 "filename": "/dev/nullb0", 00:12:12.104 "name": "null0" 00:12:12.104 }, 00:12:12.104 "method": "bdev_xnvme_create" 00:12:12.104 }, 00:12:12.104 { 00:12:12.104 "method": "bdev_wait_for_examine" 00:12:12.104 } 00:12:12.104 ] 00:12:12.104 } 00:12:12.104 ] 00:12:12.104 } 00:12:12.104 [2024-11-19 20:01:45.760742] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
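The pass that just completed copied a 1 GiB malloc bdev onto a null_blk-backed xnvme bdev at an average of 252 MBps, driven by spdk_dd and the JSON shown above; the run starting here (xnvme.sh@47) repeats it in the opposite direction with --ib=null0 --ob=malloc0. A standalone reproduction assembled from the commands and config visible in the log, with an illustrative config file path in place of the harness's /dev/fd/62 process substitution:

    modprobe null_blk gb=1    # init_null_blk in the trace: one 1 GiB /dev/nullb0

    cat > /tmp/xnvme_copy.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create"
            },
            {
              "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json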
00:12:12.104 [2024-11-19 20:01:45.760855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68633 ] 00:12:12.366 [2024-11-19 20:01:45.920510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.366 [2024-11-19 20:01:46.036692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.913  [2024-11-19T20:01:49.275Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-19T20:01:50.207Z] Copying: 488/1024 [MB] (261 MBps) [2024-11-19T20:01:51.142Z] Copying: 791/1024 [MB] (302 MBps) [2024-11-19T20:01:53.070Z] Copying: 1024/1024 [MB] (average 271 MBps) 00:12:19.276 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.276 20:01:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 { 00:12:19.276 "subsystems": [ 00:12:19.276 { 00:12:19.276 "subsystem": "bdev", 00:12:19.276 "config": [ 00:12:19.276 { 00:12:19.276 "params": { 00:12:19.276 "block_size": 512, 00:12:19.276 "num_blocks": 2097152, 00:12:19.276 "name": "malloc0" 00:12:19.276 }, 00:12:19.276 "method": "bdev_malloc_create" 00:12:19.276 }, 00:12:19.276 { 00:12:19.276 "params": { 00:12:19.276 "io_mechanism": "io_uring", 00:12:19.276 "filename": "/dev/nullb0", 00:12:19.276 "name": "null0" 00:12:19.276 }, 00:12:19.276 "method": "bdev_xnvme_create" 00:12:19.276 }, 00:12:19.276 { 00:12:19.276 "method": "bdev_wait_for_examine" 00:12:19.276 } 00:12:19.276 ] 00:12:19.276 } 00:12:19.276 ] 00:12:19.276 } 00:12:19.276 [2024-11-19 20:01:52.889358] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
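The io_uring passes that follow are identical to the libaio ones except for a single field in the generated config: io_mechanism in the bdev_xnvme_create params flips from "libaio" to "io_uring". Applied to the illustrative config file sketched earlier, that substitution is one jq expression:

    jq '(.subsystems[].config[] | select(.method == "bdev_xnvme_create") | .params.io_mechanism) = "io_uring"' \
        /tmp/xnvme_copy.json > /tmp/xnvme_copy_uring.json

In this log the switch lifts the malloc0-to-null0 copy from an average of 252 MBps to 312 MBps and the null0-to-malloc0 direction from 271 MBps to 318 MBps, as the Copying summaries below report.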
00:12:19.276 [2024-11-19 20:01:52.889473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68720 ] 00:12:19.276 [2024-11-19 20:01:53.046001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.535 [2024-11-19 20:01:53.129392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.450  [2024-11-19T20:01:56.179Z] Copying: 312/1024 [MB] (312 MBps) [2024-11-19T20:01:57.114Z] Copying: 625/1024 [MB] (312 MBps) [2024-11-19T20:01:57.372Z] Copying: 937/1024 [MB] (312 MBps) [2024-11-19T20:01:59.280Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:25.486 00:12:25.486 20:01:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.487 20:01:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.487 20:01:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.487 20:01:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.487 { 00:12:25.487 "subsystems": [ 00:12:25.487 { 00:12:25.487 "subsystem": "bdev", 00:12:25.487 "config": [ 00:12:25.487 { 00:12:25.487 "params": { 00:12:25.487 "block_size": 512, 00:12:25.487 "num_blocks": 2097152, 00:12:25.487 "name": "malloc0" 00:12:25.487 }, 00:12:25.487 "method": "bdev_malloc_create" 00:12:25.487 }, 00:12:25.487 { 00:12:25.487 "params": { 00:12:25.487 "io_mechanism": "io_uring", 00:12:25.487 "filename": "/dev/nullb0", 00:12:25.487 "name": "null0" 00:12:25.487 }, 00:12:25.487 "method": "bdev_xnvme_create" 00:12:25.487 }, 00:12:25.487 { 00:12:25.487 "method": "bdev_wait_for_examine" 00:12:25.487 } 00:12:25.487 ] 00:12:25.487 } 00:12:25.487 ] 00:12:25.487 } 00:12:25.487 [2024-11-19 20:01:59.065929] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:12:25.487 [2024-11-19 20:01:59.066042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68796 ] 00:12:25.487 [2024-11-19 20:01:59.222348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.744 [2024-11-19 20:01:59.297885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.645  [2024-11-19T20:02:02.373Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-19T20:02:03.309Z] Copying: 636/1024 [MB] (318 MBps) [2024-11-19T20:02:03.309Z] Copying: 954/1024 [MB] (318 MBps) [2024-11-19T20:02:05.214Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:31.420 00:12:31.420 20:02:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:31.420 20:02:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:31.420 00:12:31.420 real 0m26.980s 00:12:31.420 user 0m23.452s 00:12:31.420 sys 0m2.980s 00:12:31.420 20:02:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.420 ************************************ 00:12:31.420 END TEST xnvme_to_malloc_dd_copy 00:12:31.420 ************************************ 00:12:31.420 20:02:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.682 20:02:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.682 20:02:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.682 20:02:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.682 20:02:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.682 ************************************ 00:12:31.682 START TEST xnvme_bdevperf 00:12:31.682 ************************************ 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:31.682 
20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.682 20:02:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.682 { 00:12:31.682 "subsystems": [ 00:12:31.682 { 00:12:31.682 "subsystem": "bdev", 00:12:31.682 "config": [ 00:12:31.682 { 00:12:31.682 "params": { 00:12:31.682 "io_mechanism": "libaio", 00:12:31.682 "filename": "/dev/nullb0", 00:12:31.682 "name": "null0" 00:12:31.682 }, 00:12:31.682 "method": "bdev_xnvme_create" 00:12:31.682 }, 00:12:31.682 { 00:12:31.682 "method": "bdev_wait_for_examine" 00:12:31.682 } 00:12:31.682 ] 00:12:31.682 } 00:12:31.682 ] 00:12:31.682 } 00:12:31.682 [2024-11-19 20:02:05.354590] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:31.682 [2024-11-19 20:02:05.354716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68896 ] 00:12:31.940 [2024-11-19 20:02:05.514007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.940 [2024-11-19 20:02:05.601510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.198 Running I/O for 5 seconds... 00:12:34.093 202432.00 IOPS, 790.75 MiB/s [2024-11-19T20:02:08.822Z] 202464.00 IOPS, 790.88 MiB/s [2024-11-19T20:02:10.204Z] 202645.33 IOPS, 791.58 MiB/s [2024-11-19T20:02:11.139Z] 202704.00 IOPS, 791.81 MiB/s [2024-11-19T20:02:11.139Z] 202611.20 IOPS, 791.45 MiB/s 00:12:37.345 Latency(us) 00:12:37.345 [2024-11-19T20:02:11.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.345 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.345 null0 : 5.00 202544.40 791.19 0.00 0.00 313.68 308.78 1556.48 00:12:37.345 [2024-11-19T20:02:11.139Z] =================================================================================================================== 00:12:37.345 [2024-11-19T20:02:11.139Z] Total : 202544.40 791.19 0.00 0.00 313.68 308.78 1556.48 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.605 20:02:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.864 { 00:12:37.864 "subsystems": [ 00:12:37.864 { 00:12:37.864 "subsystem": "bdev", 00:12:37.864 "config": [ 00:12:37.864 { 00:12:37.864 "params": { 00:12:37.864 "io_mechanism": "io_uring", 00:12:37.864 "filename": "/dev/nullb0", 00:12:37.864 "name": "null0" 00:12:37.864 }, 00:12:37.864 "method": "bdev_xnvme_create" 00:12:37.864 }, 
00:12:37.864 { 00:12:37.864 "method": "bdev_wait_for_examine" 00:12:37.864 } 00:12:37.864 ] 00:12:37.864 } 00:12:37.864 ] 00:12:37.864 } 00:12:37.864 [2024-11-19 20:02:11.444464] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:37.864 [2024-11-19 20:02:11.444588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68967 ] 00:12:37.864 [2024-11-19 20:02:11.601678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.123 [2024-11-19 20:02:11.680823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.123 Running I/O for 5 seconds... 00:12:40.432 231808.00 IOPS, 905.50 MiB/s [2024-11-19T20:02:15.160Z] 231648.00 IOPS, 904.88 MiB/s [2024-11-19T20:02:16.092Z] 231616.00 IOPS, 904.75 MiB/s [2024-11-19T20:02:17.027Z] 231584.00 IOPS, 904.62 MiB/s 00:12:43.233 Latency(us) 00:12:43.233 [2024-11-19T20:02:17.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.233 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.233 null0 : 5.00 231549.07 904.49 0.00 0.00 273.98 155.18 1518.67 00:12:43.233 [2024-11-19T20:02:17.027Z] =================================================================================================================== 00:12:43.233 [2024-11-19T20:02:17.027Z] Total : 231549.07 904.49 0.00 0.00 273.98 155.18 1518.67 00:12:43.801 20:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:43.801 20:02:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:43.801 00:12:43.801 real 0m12.194s 00:12:43.801 user 0m9.847s 00:12:43.801 sys 0m2.105s 00:12:43.801 20:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.801 ************************************ 00:12:43.801 END TEST xnvme_bdevperf 00:12:43.801 ************************************ 00:12:43.801 20:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.801 00:12:43.801 real 0m39.441s 00:12:43.801 user 0m33.420s 00:12:43.801 sys 0m5.196s 00:12:43.801 20:02:17 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.801 20:02:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.801 ************************************ 00:12:43.801 END TEST nvme_xnvme 00:12:43.801 ************************************ 00:12:43.801 20:02:17 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:43.801 20:02:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.801 20:02:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.801 20:02:17 -- common/autotest_common.sh@10 -- # set +x 00:12:43.801 ************************************ 00:12:43.801 START TEST blockdev_xnvme 00:12:43.801 ************************************ 00:12:43.801 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:44.062 * Looking for test storage... 
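Both bdevperf passes above used the exact invocation traced at xnvme.sh@74: 4 KiB random reads at queue depth 64 for 5 seconds against the xnvme bdev null0, with only the io_mechanism differing between runs; on this machine's /dev/nullb0 that comes to about 202.5K IOPS for libaio versus 231.5K IOPS for io_uring. Reproduced standalone with the io_uring config copied from the log (the file path is illustrative, and --json also accepts a regular file; the /dev/fd/62 in the trace is just the harness's process substitution):

    modprobe null_blk gb=1

    cat > /tmp/xnvme_null0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096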
00:12:44.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.062 20:02:17 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.062 --rc genhtml_branch_coverage=1 00:12:44.062 --rc genhtml_function_coverage=1 00:12:44.062 --rc genhtml_legend=1 00:12:44.062 --rc geninfo_all_blocks=1 00:12:44.062 --rc geninfo_unexecuted_blocks=1 00:12:44.062 00:12:44.062 ' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.062 --rc genhtml_branch_coverage=1 00:12:44.062 --rc genhtml_function_coverage=1 00:12:44.062 --rc genhtml_legend=1 
00:12:44.062 --rc geninfo_all_blocks=1 00:12:44.062 --rc geninfo_unexecuted_blocks=1 00:12:44.062 00:12:44.062 ' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.062 --rc genhtml_branch_coverage=1 00:12:44.062 --rc genhtml_function_coverage=1 00:12:44.062 --rc genhtml_legend=1 00:12:44.062 --rc geninfo_all_blocks=1 00:12:44.062 --rc geninfo_unexecuted_blocks=1 00:12:44.062 00:12:44.062 ' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.062 --rc genhtml_branch_coverage=1 00:12:44.062 --rc genhtml_function_coverage=1 00:12:44.062 --rc genhtml_legend=1 00:12:44.062 --rc geninfo_all_blocks=1 00:12:44.062 --rc geninfo_unexecuted_blocks=1 00:12:44.062 00:12:44.062 ' 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69109 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:44.062 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69109 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69109 ']' 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.062 20:02:17 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.063 20:02:17 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.063 20:02:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:44.063 20:02:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.063 [2024-11-19 20:02:17.810181] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:44.063 [2024-11-19 20:02:17.810347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69109 ] 00:12:44.324 [2024-11-19 20:02:17.970544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.324 [2024-11-19 20:02:18.096111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.269 20:02:18 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.269 20:02:18 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:45.269 20:02:18 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:45.269 20:02:18 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:45.269 20:02:18 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:45.269 20:02:18 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:45.269 20:02:18 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.533 Waiting for block devices as requested 00:12:45.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.792 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.792 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.792 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.060 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:51.060 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.060 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned 
nvme1n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 nvme0n1 00:12:51.061 nvme1n1 00:12:51.061 nvme2n1 00:12:51.061 nvme2n2 00:12:51.061 nvme2n3 00:12:51.061 nvme3n1 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:51.061 20:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:51.061 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:51.062 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3a5f5127-a2ad-4bf5-ac18-8ce4a5835e7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3a5f5127-a2ad-4bf5-ac18-8ce4a5835e7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3638818c-ef4f-4376-9abd-08af8cecf00d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3638818c-ef4f-4376-9abd-08af8cecf00d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b0e327f4-a7cb-4aa6-aee0-97e75cbdbfb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0e327f4-a7cb-4aa6-aee0-97e75cbdbfb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "98e22a96-3db3-4c5f-b171-6540be8061ff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "98e22a96-3db3-4c5f-b171-6540be8061ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ede6db11-e821-47b1-a1b1-6054c710aeeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ede6db11-e821-47b1-a1b1-6054c710aeeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3ff3746d-6f15-4aeb-a349-a15aab8439f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3ff3746d-6f15-4aeb-a349-a15aab8439f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:51.062 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:51.062 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:51.062 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:51.062 20:02:24 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69109 00:12:51.062 20:02:24 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69109 ']' 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69109 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69109 00:12:51.062 killing process with pid 69109 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69109' 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69109 00:12:51.062 20:02:24 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69109 00:12:52.438 20:02:25 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:52.438 20:02:25 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.438 20:02:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.438 20:02:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.438 20:02:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.438 ************************************ 00:12:52.438 START TEST bdev_hello_world 00:12:52.438 ************************************ 00:12:52.438 20:02:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.438 [2024-11-19 20:02:26.024591] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:52.438 [2024-11-19 20:02:26.024706] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69468 ] 00:12:52.438 [2024-11-19 20:02:26.182079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.697 [2024-11-19 20:02:26.265898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.955 [2024-11-19 20:02:26.546934] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:52.955 [2024-11-19 20:02:26.547091] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:52.955 [2024-11-19 20:02:26.547108] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:52.955 [2024-11-19 20:02:26.548564] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:52.955 [2024-11-19 20:02:26.548735] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:52.955 [2024-11-19 20:02:26.548750] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:52.955 [2024-11-19 20:02:26.549043] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
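For reference, the bdev setup traced above (blockdev.sh lines 94-96) reduces to registering each visible NVMe namespace as an xNVMe bdev over the target's default RPC socket. A minimal manual sketch of the same loop, with the device glob and io_mechanism taken from this run and the rpc.py path assumed relative to the repo root:

    io_mechanism=io_uring
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue   # namespaces only; skip anything not a block device
        # args as traced: <filename> <bdev name> <io mechanism>
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
    done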
00:12:52.955 00:12:52.955 [2024-11-19 20:02:26.549062] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.522 00:12:53.522 real 0m1.128s 00:12:53.522 user 0m0.858s 00:12:53.522 sys 0m0.160s 00:12:53.522 20:02:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.522 20:02:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:53.522 ************************************ 00:12:53.522 END TEST bdev_hello_world 00:12:53.522 ************************************ 00:12:53.522 20:02:27 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.523 20:02:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.523 20:02:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.523 20:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 ************************************ 00:12:53.523 START TEST bdev_bounds 00:12:53.523 ************************************ 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:53.523 Process bdevio pid: 69499 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69499 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69499' 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69499 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69499 ']' 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.523 20:02:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 [2024-11-19 20:02:27.213917] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
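The waitforlisten step traced here polls the freshly started app's RPC socket until it answers. Only rpc_addr and max_retries are visible in the trace; the loop body below is an assumed sketch of the usual probe pattern:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for ((i = 1; i <= max_retries; i++)); do
            # the socket accepts RPCs once the app finishes initialization
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # app died while we waited
            sleep 0.1                                # interval is an assumption
        done
        return 1
    }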
00:12:53.523 [2024-11-19 20:02:27.214033] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69499 ] 00:12:53.782 [2024-11-19 20:02:27.370388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.782 [2024-11-19 20:02:27.449324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.782 [2024-11-19 20:02:27.449560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.782 [2024-11-19 20:02:27.449593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.349 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.349 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:54.349 20:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.349 I/O targets: 00:12:54.349 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:54.349 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:54.349 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.349 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.349 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.349 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:54.349 00:12:54.349 00:12:54.349 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.349 http://cunit.sourceforge.net/ 00:12:54.349 00:12:54.349 00:12:54.349 Suite: bdevio tests on: nvme3n1 00:12:54.349 Test: blockdev write read block ...passed 00:12:54.349 Test: blockdev write zeroes read block ...passed 00:12:54.349 Test: blockdev write zeroes read no split ...passed 00:12:54.349 Test: blockdev write zeroes read split ...passed 00:12:54.608 Test: blockdev write zeroes read split partial ...passed 00:12:54.608 Test: blockdev reset ...passed 00:12:54.608 Test: blockdev write read 8 blocks ...passed 00:12:54.608 Test: blockdev write read size > 128k ...passed 00:12:54.608 Test: blockdev write read invalid size ...passed 00:12:54.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.608 Test: blockdev write read max offset ...passed 00:12:54.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.608 Test: blockdev writev readv 8 blocks ...passed 00:12:54.608 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.608 Test: blockdev writev readv block ...passed 00:12:54.608 Test: blockdev writev readv size > 128k ...passed 00:12:54.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.608 Test: blockdev comparev and writev ...passed 00:12:54.608 Test: blockdev nvme passthru rw ...passed 00:12:54.608 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.608 Test: blockdev nvme admin passthru ...passed 00:12:54.608 Test: blockdev copy ...passed 00:12:54.608 Suite: bdevio tests on: nvme2n3 00:12:54.608 Test: blockdev write read block ...passed 00:12:54.608 Test: blockdev write zeroes read block ...passed 00:12:54.608 Test: blockdev write zeroes read no split ...passed 00:12:54.608 Test: blockdev write zeroes read split ...passed 00:12:54.608 Test: blockdev write zeroes read split partial ...passed 00:12:54.608 Test: blockdev reset ...passed 
00:12:54.608 Test: blockdev write read 8 blocks ...passed 00:12:54.608 Test: blockdev write read size > 128k ...passed 00:12:54.608 Test: blockdev write read invalid size ...passed 00:12:54.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.608 Test: blockdev write read max offset ...passed 00:12:54.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.608 Test: blockdev writev readv 8 blocks ...passed 00:12:54.608 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.608 Test: blockdev writev readv block ...passed 00:12:54.608 Test: blockdev writev readv size > 128k ...passed 00:12:54.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.608 Test: blockdev comparev and writev ...passed 00:12:54.608 Test: blockdev nvme passthru rw ...passed 00:12:54.608 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.608 Test: blockdev nvme admin passthru ...passed 00:12:54.608 Test: blockdev copy ...passed 00:12:54.608 Suite: bdevio tests on: nvme2n2 00:12:54.608 Test: blockdev write read block ...passed 00:12:54.608 Test: blockdev write zeroes read block ...passed 00:12:54.608 Test: blockdev write zeroes read no split ...passed 00:12:54.608 Test: blockdev write zeroes read split ...passed 00:12:54.608 Test: blockdev write zeroes read split partial ...passed 00:12:54.608 Test: blockdev reset ...passed 00:12:54.608 Test: blockdev write read 8 blocks ...passed 00:12:54.608 Test: blockdev write read size > 128k ...passed 00:12:54.608 Test: blockdev write read invalid size ...passed 00:12:54.608 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.608 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.608 Test: blockdev write read max offset ...passed 00:12:54.608 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.608 Test: blockdev writev readv 8 blocks ...passed 00:12:54.608 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.608 Test: blockdev writev readv block ...passed 00:12:54.608 Test: blockdev writev readv size > 128k ...passed 00:12:54.608 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.608 Test: blockdev comparev and writev ...passed 00:12:54.608 Test: blockdev nvme passthru rw ...passed 00:12:54.608 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.608 Test: blockdev nvme admin passthru ...passed 00:12:54.608 Test: blockdev copy ...passed 00:12:54.608 Suite: bdevio tests on: nvme2n1 00:12:54.608 Test: blockdev write read block ...passed 00:12:54.608 Test: blockdev write zeroes read block ...passed 00:12:54.608 Test: blockdev write zeroes read no split ...passed 00:12:54.608 Test: blockdev write zeroes read split ...passed 00:12:54.608 Test: blockdev write zeroes read split partial ...passed 00:12:54.609 Test: blockdev reset ...passed 00:12:54.609 Test: blockdev write read 8 blocks ...passed 00:12:54.609 Test: blockdev write read size > 128k ...passed 00:12:54.609 Test: blockdev write read invalid size ...passed 00:12:54.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.609 Test: blockdev write read max offset ...passed 00:12:54.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.609 Test: blockdev writev readv 8 blocks 
...passed 00:12:54.609 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.609 Test: blockdev writev readv block ...passed 00:12:54.609 Test: blockdev writev readv size > 128k ...passed 00:12:54.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.609 Test: blockdev comparev and writev ...passed 00:12:54.609 Test: blockdev nvme passthru rw ...passed 00:12:54.609 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.609 Test: blockdev nvme admin passthru ...passed 00:12:54.609 Test: blockdev copy ...passed 00:12:54.609 Suite: bdevio tests on: nvme1n1 00:12:54.609 Test: blockdev write read block ...passed 00:12:54.609 Test: blockdev write zeroes read block ...passed 00:12:54.609 Test: blockdev write zeroes read no split ...passed 00:12:54.609 Test: blockdev write zeroes read split ...passed 00:12:54.609 Test: blockdev write zeroes read split partial ...passed 00:12:54.609 Test: blockdev reset ...passed 00:12:54.609 Test: blockdev write read 8 blocks ...passed 00:12:54.609 Test: blockdev write read size > 128k ...passed 00:12:54.609 Test: blockdev write read invalid size ...passed 00:12:54.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.609 Test: blockdev write read max offset ...passed 00:12:54.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.609 Test: blockdev writev readv 8 blocks ...passed 00:12:54.609 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.609 Test: blockdev writev readv block ...passed 00:12:54.609 Test: blockdev writev readv size > 128k ...passed 00:12:54.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.609 Test: blockdev comparev and writev ...passed 00:12:54.609 Test: blockdev nvme passthru rw ...passed 00:12:54.609 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.609 Test: blockdev nvme admin passthru ...passed 00:12:54.609 Test: blockdev copy ...passed 00:12:54.609 Suite: bdevio tests on: nvme0n1 00:12:54.609 Test: blockdev write read block ...passed 00:12:54.609 Test: blockdev write zeroes read block ...passed 00:12:54.609 Test: blockdev write zeroes read no split ...passed 00:12:54.609 Test: blockdev write zeroes read split ...passed 00:12:54.609 Test: blockdev write zeroes read split partial ...passed 00:12:54.609 Test: blockdev reset ...passed 00:12:54.609 Test: blockdev write read 8 blocks ...passed 00:12:54.609 Test: blockdev write read size > 128k ...passed 00:12:54.609 Test: blockdev write read invalid size ...passed 00:12:54.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.609 Test: blockdev write read max offset ...passed 00:12:54.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.609 Test: blockdev writev readv 8 blocks ...passed 00:12:54.609 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.868 Test: blockdev writev readv block ...passed 00:12:54.868 Test: blockdev writev readv size > 128k ...passed 00:12:54.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.868 Test: blockdev comparev and writev ...passed 00:12:54.868 Test: blockdev nvme passthru rw ...passed 00:12:54.868 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.868 Test: blockdev nvme admin passthru ...passed 00:12:54.868 Test: blockdev copy ...passed 
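All six per-bdev suites above are driven from outside the bdevio process: bdevio is launched with -w (wait) against the generated JSON config, and tests.py then fires the CUnit suites over RPC. A condensed sketch of the two commands as traced, with repo-relative paths:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once waitforlisten "$bdevio_pid" succeeds:
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"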
00:12:54.868 00:12:54.868 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.868 suites 6 6 n/a 0 0 00:12:54.868 tests 138 138 138 0 0 00:12:54.868 asserts 780 780 780 0 n/a 00:12:54.868 00:12:54.868 Elapsed time = 0.826 seconds 00:12:54.868 0 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69499 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69499 ']' 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69499 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69499 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69499' 00:12:54.868 killing process with pid 69499 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69499 00:12:54.868 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69499 00:12:55.438 20:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:55.438 00:12:55.438 real 0m1.844s 00:12:55.438 user 0m4.629s 00:12:55.438 sys 0m0.268s 00:12:55.438 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.438 20:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:55.438 ************************************ 00:12:55.438 END TEST bdev_bounds 00:12:55.438 ************************************ 00:12:55.438 20:02:29 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.438 20:02:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:55.438 20:02:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.438 20:02:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.438 ************************************ 00:12:55.438 START TEST bdev_nbd 00:12:55.438 ************************************ 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
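The nbd stage being set up here exports each bdev as a kernel /dev/nbdN device through a dedicated RPC socket, reads a block back through the kernel block layer, and unwinds. A sketch of one round-trip using the RPC calls that appear in the trace below (the scratch-file path is illustrative; the run uses test/bdev/nbdtest):

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # sanity read
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_get_disks    # expect an empty list afterwards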
00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69555 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69555 /var/tmp/spdk-nbd.sock 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69555 ']' 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:55.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.438 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:55.438 [2024-11-19 20:02:29.129273] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
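The waitfornbd helper traced in the lines that follow confirms each export is usable: it waits for the device name to appear in /proc/partitions, then performs a direct-I/O 4 KiB read and checks that a nonzero-sized file resulted. A simplified sketch; the trace retries the read up to 20 times, collapsed to a single attempt here, and the sleep interval is an assumption:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # interval not in trace
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }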
00:12:55.438 [2024-11-19 20:02:29.129511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.697 [2024-11-19 20:02:29.281862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.697 [2024-11-19 20:02:29.356344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.263 20:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.521 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.522 
1+0 records in 00:12:56.522 1+0 records out 00:12:56.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308536 s, 13.3 MB/s 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.522 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.780 1+0 records in 00:12:56.780 1+0 records out 00:12:56.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449137 s, 9.1 MB/s 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.780 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:57.039 20:02:30 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.039 1+0 records in 00:12:57.039 1+0 records out 00:12:57.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442009 s, 9.3 MB/s 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.039 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.039 1+0 records in 00:12:57.039 1+0 records out 00:12:57.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333128 s, 12.3 MB/s 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.298 20:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.298 1+0 records in 00:12:57.298 1+0 records out 00:12:57.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447638 s, 9.2 MB/s 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.298 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:57.556 20:02:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.556 1+0 records in 00:12:57.556 1+0 records out 00:12:57.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430062 s, 9.5 MB/s 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.556 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.557 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.557 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd0", 00:12:57.815 "bdev_name": "nvme0n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd1", 00:12:57.815 "bdev_name": "nvme1n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd2", 00:12:57.815 "bdev_name": "nvme2n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd3", 00:12:57.815 "bdev_name": "nvme2n2" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd4", 00:12:57.815 "bdev_name": "nvme2n3" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd5", 00:12:57.815 "bdev_name": "nvme3n1" 00:12:57.815 } 00:12:57.815 ]' 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd0", 00:12:57.815 "bdev_name": "nvme0n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd1", 00:12:57.815 "bdev_name": "nvme1n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd2", 00:12:57.815 "bdev_name": "nvme2n1" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd3", 00:12:57.815 "bdev_name": "nvme2n2" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd4", 00:12:57.815 "bdev_name": "nvme2n3" 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "nbd_device": "/dev/nbd5", 00:12:57.815 "bdev_name": "nvme3n1" 00:12:57.815 } 00:12:57.815 ]' 00:12:57.815 20:02:31 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.815 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.073 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.073 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.073 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.073 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.073 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.074 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.074 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.074 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.074 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.074 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.332 20:02:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.590 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.849 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.107 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.366 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:59.366 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:59.366 20:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.366 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:59.631 /dev/nbd0 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.631 1+0 records in 00:12:59.631 1+0 records out 00:12:59.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209235 s, 19.6 MB/s 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.631 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:59.889 /dev/nbd1 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.889 1+0 records in 00:12:59.889 1+0 records out 00:12:59.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563033 s, 7.3 MB/s 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:59.889 20:02:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.889 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:59.889 /dev/nbd10 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.150 1+0 records in 00:13:00.150 1+0 records out 00:13:00.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429112 s, 9.5 MB/s 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:00.150 /dev/nbd11 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.150 20:02:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.150 1+0 records in 00:13:00.150 1+0 records out 00:13:00.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930218 s, 4.4 MB/s 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.150 20:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:00.412 /dev/nbd12 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.412 1+0 records in 00:13:00.412 1+0 records out 00:13:00.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106504 s, 3.8 MB/s 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.412 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:00.674 /dev/nbd13 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.674 1+0 records in 00:13:00.674 1+0 records out 00:13:00.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114704 s, 3.6 MB/s 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.674 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:00.934 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd0", 00:13:00.935 "bdev_name": "nvme0n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd1", 00:13:00.935 "bdev_name": "nvme1n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd10", 00:13:00.935 "bdev_name": "nvme2n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd11", 00:13:00.935 "bdev_name": "nvme2n2" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd12", 00:13:00.935 "bdev_name": "nvme2n3" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd13", 00:13:00.935 "bdev_name": "nvme3n1" 00:13:00.935 } 00:13:00.935 ]' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd0", 00:13:00.935 "bdev_name": "nvme0n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd1", 00:13:00.935 "bdev_name": "nvme1n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd10", 00:13:00.935 "bdev_name": "nvme2n1" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd11", 00:13:00.935 "bdev_name": "nvme2n2" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd12", 00:13:00.935 "bdev_name": "nvme2n3" 00:13:00.935 }, 00:13:00.935 { 00:13:00.935 "nbd_device": "/dev/nbd13", 00:13:00.935 "bdev_name": "nvme3n1" 00:13:00.935 } 00:13:00.935 ]' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:00.935 /dev/nbd1 00:13:00.935 /dev/nbd10 00:13:00.935 /dev/nbd11 00:13:00.935 /dev/nbd12 00:13:00.935 /dev/nbd13' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:00.935 /dev/nbd1 00:13:00.935 /dev/nbd10 00:13:00.935 /dev/nbd11 00:13:00.935 /dev/nbd12 00:13:00.935 /dev/nbd13' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:00.935 256+0 records in 00:13:00.935 256+0 records out 00:13:00.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101903 s, 103 MB/s 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.935 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.196 256+0 records in 00:13:01.196 256+0 records out 00:13:01.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235226 s, 4.5 MB/s 00:13:01.196 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.196 20:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:01.458 256+0 records in 00:13:01.458 256+0 records out 00:13:01.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.301094 s, 
3.5 MB/s 00:13:01.458 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.458 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:01.720 256+0 records in 00:13:01.720 256+0 records out 00:13:01.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222549 s, 4.7 MB/s 00:13:01.720 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.720 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:02.010 256+0 records in 00:13:02.010 256+0 records out 00:13:02.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145647 s, 7.2 MB/s 00:13:02.010 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.010 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:02.295 256+0 records in 00:13:02.295 256+0 records out 00:13:02.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231929 s, 4.5 MB/s 00:13:02.295 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.295 20:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:02.295 256+0 records in 00:13:02.295 256+0 records out 00:13:02.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209716 s, 5.0 MB/s 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.295 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:02.556 
20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.556 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.818 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.080 20:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.338 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:03.339 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.339 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.339 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.339 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.596 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.854 
20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.854 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:04.112 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:04.371 malloc_lvol_verify 00:13:04.371 20:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:04.629 28a559d6-7152-4ba5-8575-00a67840d3fa 00:13:04.629 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:04.629 82c372ce-8bfb-4a7c-8000-dc7023358d3e 00:13:04.629 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:04.887 /dev/nbd0 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:04.887 mke2fs 1.47.0 (5-Feb-2023) 00:13:04.887 Discarding device blocks: 0/4096 
done 00:13:04.887 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:04.887 00:13:04.887 Allocating group tables: 0/1 done 00:13:04.887 Writing inode tables: 0/1 done 00:13:04.887 Creating journal (1024 blocks): done 00:13:04.887 Writing superblocks and filesystem accounting information: 0/1 done 00:13:04.887 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.887 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69555 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69555 ']' 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69555 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69555 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69555' 00:13:05.146 killing process with pid 69555 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69555 00:13:05.146 20:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69555 00:13:05.712 20:02:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:05.712 00:13:05.712 real 0m10.348s 00:13:05.712 user 0m14.000s 00:13:05.712 sys 0m3.594s 00:13:05.712 20:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.712 ************************************ 00:13:05.712 END TEST bdev_nbd 00:13:05.712 ************************************ 00:13:05.712 20:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
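The bdev_nbd test that just finished leans on two polling helpers that dominate the xtrace above: waitfornbd, which runs after each nbd_start_disk to confirm the new /dev/nbdX appears in /proc/partitions and answers a single 4 KiB direct read (the repeated "1+0 records in/out" lines), and waitfornbd_exit, which runs after each nbd_stop_disk until the device drops back out of /proc/partitions. Below is a minimal bash sketch of both, reconstructed from the trace rather than copied from the SPDK sources — the sleep interval and failure handling are assumptions; only the loop bounds, grep, dd, stat, and size check are actually visible in the log.

    # Reconstructed sketch -- not the verbatim nbd_common.sh / autotest_common.sh code.
    waitfornbd() {
        local nbd_name=$1 i size
        # 1) wait (up to 20 tries) for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed interval; the trace only shows the loop bounds
        done
        # 2) prove the device accepts I/O: one 4 KiB O_DIRECT read must succeed
        #    and produce a non-empty file (paths match this run's workspace)
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                    bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        # poll until the stopped device disappears from /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

The same pattern explains the nbd_dd_data_verify passes earlier in the trace: a 1 MiB random file is dd'd onto each of the six NBD devices with oflag=direct, then cmp -b -n 1M compares it back from each device, so a silent data mismatch on any xnvme bdev fails the test immediately.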
00:13:05.712 20:02:39 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:05.712 20:02:39 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:05.712 20:02:39 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:05.712 20:02:39 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:05.712 20:02:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.712 20:02:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.712 20:02:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.712 ************************************ 00:13:05.712 START TEST bdev_fio 00:13:05.712 ************************************ 00:13:05.712 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:05.712 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:05.712 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:05.712 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:05.712 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:05.712 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:05.713 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:05.972 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:05.973 ************************************ 00:13:05.973 START TEST bdev_fio_rw_verify 00:13:05.973 ************************************ 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.973 20:02:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.973 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.973 fio-3.35 00:13:05.973 Starting 6 threads 00:13:18.198 00:13:18.198 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69956: Tue Nov 19 20:02:50 2024 00:13:18.198 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(860MiB/10002msec) 00:13:18.198 slat (usec): min=2, max=2685, avg= 5.34, stdev=15.60 00:13:18.198 clat (usec): min=69, max=7631, avg=849.06, stdev=705.47 00:13:18.198 lat (usec): min=75, max=7636, avg=854.40, stdev=706.12 
00:13:18.198 clat percentiles (usec): 00:13:18.198 | 50.000th=[ 603], 99.000th=[ 3163], 99.900th=[ 4686], 99.990th=[ 5997], 00:13:18.198 | 99.999th=[ 7635] 00:13:18.198 write: IOPS=22.4k, BW=87.7MiB/s (91.9MB/s)(877MiB/10002msec); 0 zone resets 00:13:18.198 slat (usec): min=10, max=5025, avg=33.29, stdev=113.19 00:13:18.198 clat (usec): min=54, max=10560, avg=1028.39, stdev=795.25 00:13:18.198 lat (usec): min=68, max=10576, avg=1061.68, stdev=810.27 00:13:18.198 clat percentiles (usec): 00:13:18.199 | 50.000th=[ 742], 99.000th=[ 3621], 99.900th=[ 4817], 99.990th=[ 6128], 00:13:18.199 | 99.999th=[ 7177] 00:13:18.199 bw ( KiB/s): min=52924, max=144202, per=99.66%, avg=89464.53, stdev=4267.87, samples=114 00:13:18.199 iops : min=13229, max=36049, avg=22364.58, stdev=1067.01, samples=114 00:13:18.199 lat (usec) : 100=0.06%, 250=10.72%, 500=26.40%, 750=17.09%, 1000=10.24% 00:13:18.199 lat (msec) : 2=25.49%, 4=9.61%, 10=0.39%, 20=0.01% 00:13:18.199 cpu : usr=43.35%, sys=30.81%, ctx=7124, majf=0, minf=19966 00:13:18.199 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:18.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.199 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.199 issued rwts: total=220040,224471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.199 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:18.199 00:13:18.199 Run status group 0 (all jobs): 00:13:18.199 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=860MiB (901MB), run=10002-10002msec 00:13:18.199 WRITE: bw=87.7MiB/s (91.9MB/s), 87.7MiB/s-87.7MiB/s (91.9MB/s-91.9MB/s), io=877MiB (919MB), run=10002-10002msec 00:13:18.199 ----------------------------------------------------- 00:13:18.199 Suppressions used: 00:13:18.199 count bytes template 00:13:18.199 6 48 /usr/src/fio/parse.c 00:13:18.199 4274 410304 /usr/src/fio/iolog.c 00:13:18.199 1 8 libtcmalloc_minimal.so 00:13:18.199 1 904 libcrypto.so 00:13:18.199 ----------------------------------------------------- 00:13:18.199 00:13:18.199 00:13:18.199 real 0m11.804s 00:13:18.199 user 0m27.410s 00:13:18.199 sys 0m18.779s 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 ************************************ 00:13:18.199 END TEST bdev_fio_rw_verify 00:13:18.199 ************************************ 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3a5f5127-a2ad-4bf5-ac18-8ce4a5835e7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3a5f5127-a2ad-4bf5-ac18-8ce4a5835e7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3638818c-ef4f-4376-9abd-08af8cecf00d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3638818c-ef4f-4376-9abd-08af8cecf00d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b0e327f4-a7cb-4aa6-aee0-97e75cbdbfb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0e327f4-a7cb-4aa6-aee0-97e75cbdbfb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "98e22a96-3db3-4c5f-b171-6540be8061ff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "98e22a96-3db3-4c5f-b171-6540be8061ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ede6db11-e821-47b1-a1b1-6054c710aeeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ede6db11-e821-47b1-a1b1-6054c710aeeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3ff3746d-6f15-4aeb-a349-a15aab8439f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3ff3746d-6f15-4aeb-a349-a15aab8439f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.199 /home/vagrant/spdk_repo/spdk 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:18.199 00:13:18.199 real 0m11.953s 00:13:18.199 user 
0m27.481s 00:13:18.199 sys 0m18.842s 00:13:18.199 ************************************ 00:13:18.199 END TEST bdev_fio 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.199 20:02:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 ************************************ 00:13:18.199 20:02:51 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:18.200 20:02:51 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.200 20:02:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:18.200 20:02:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.200 20:02:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.200 ************************************ 00:13:18.200 START TEST bdev_verify 00:13:18.200 ************************************ 00:13:18.200 20:02:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.200 [2024-11-19 20:02:51.539212] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:18.200 [2024-11-19 20:02:51.539344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70127 ] 00:13:18.200 [2024-11-19 20:02:51.699808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.200 [2024-11-19 20:02:51.802758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.200 [2024-11-19 20:02:51.802862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.460 Running I/O for 5 seconds... 
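For reference, the bdev_verify stage that starts here is a single bdevperf invocation against the bdev.json generated earlier in this job. A minimal sketch of the equivalent command; the flag readings are inferred from this trace rather than from bdevperf's help output:

    # Run a 5-second verify workload over the xNVMe bdevs (paths as used in this job).
    #   -q 128    queue depth per job
    #   -o 4096   I/O size in bytes
    #   -w verify write-then-read-back-and-compare workload
    #   -t 5      run time in seconds
    #   -C        let every core drive every bdev (hence the Core Mask 0x1/0x2 job pairs below)
    #   -m 0x3    core mask: reactors on cores 0 and 1
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3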
00:13:20.788 24311.00 IOPS, 94.96 MiB/s [2024-11-19T20:02:55.543Z] 23579.50 IOPS, 92.11 MiB/s [2024-11-19T20:02:56.484Z] 23264.00 IOPS, 90.87 MiB/s [2024-11-19T20:02:57.425Z] 23397.75 IOPS, 91.40 MiB/s [2024-11-19T20:02:57.425Z] 23033.60 IOPS, 89.97 MiB/s 00:13:23.631 Latency(us) 00:13:23.631 [2024-11-19T20:02:57.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.631 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0xa0000 00:13:23.631 nvme0n1 : 5.05 1900.39 7.42 0.00 0.00 67203.84 6377.16 70980.53 00:13:23.631 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0xa0000 length 0xa0000 00:13:23.631 nvme0n1 : 5.04 1801.84 7.04 0.00 0.00 70891.53 9124.63 70577.23 00:13:23.631 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0xbd0bd 00:13:23.631 nvme1n1 : 5.06 2207.56 8.62 0.00 0.00 57636.23 6049.48 61301.37 00:13:23.631 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:23.631 nvme1n1 : 5.05 2082.48 8.13 0.00 0.00 61173.91 5041.23 67754.14 00:13:23.631 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0x80000 00:13:23.631 nvme2n1 : 5.06 1921.23 7.50 0.00 0.00 66057.08 8116.38 70173.93 00:13:23.631 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x80000 length 0x80000 00:13:23.631 nvme2n1 : 5.06 1897.61 7.41 0.00 0.00 67071.09 6604.01 69770.63 00:13:23.631 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0x80000 00:13:23.631 nvme2n2 : 5.07 1893.89 7.40 0.00 0.00 66861.55 8267.62 70173.93 00:13:23.631 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x80000 length 0x80000 00:13:23.631 nvme2n2 : 5.06 1820.12 7.11 0.00 0.00 69749.55 7057.72 74610.22 00:13:23.631 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0x80000 00:13:23.631 nvme2n3 : 5.08 1915.63 7.48 0.00 0.00 65963.80 7713.08 70173.93 00:13:23.631 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x80000 length 0x80000 00:13:23.631 nvme2n3 : 5.05 1800.14 7.03 0.00 0.00 70375.69 8418.86 76223.41 00:13:23.631 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x0 length 0x20000 00:13:23.631 nvme3n1 : 5.08 1913.89 7.48 0.00 0.00 65961.51 1398.94 70173.93 00:13:23.631 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.631 Verification LBA range: start 0x20000 length 0x20000 00:13:23.631 nvme3n1 : 5.07 1818.72 7.10 0.00 0.00 69523.72 3957.37 70577.23 00:13:23.631 [2024-11-19T20:02:57.425Z] =================================================================================================================== 00:13:23.631 [2024-11-19T20:02:57.425Z] Total : 22973.51 89.74 0.00 0.00 66318.35 1398.94 76223.41 00:13:24.575 00:13:24.575 real 0m6.560s 00:13:24.575 user 0m11.184s 00:13:24.575 sys 0m0.992s 00:13:24.575 20:02:58 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.575 ************************************ 00:13:24.575 END TEST bdev_verify 00:13:24.575 ************************************ 00:13:24.575 20:02:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:24.575 20:02:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:24.575 20:02:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:24.575 20:02:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.575 20:02:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.575 ************************************ 00:13:24.575 START TEST bdev_verify_big_io 00:13:24.575 ************************************ 00:13:24.575 20:02:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:24.575 [2024-11-19 20:02:58.161950] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:24.575 [2024-11-19 20:02:58.162067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:13:24.575 [2024-11-19 20:02:58.321261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.837 [2024-11-19 20:02:58.442571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.837 [2024-11-19 20:02:58.442669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.411 Running I/O for 5 seconds... 
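The big-I/O pass below differs from the previous verify pass only in I/O size, as the run_test line above shows. A sketch of the delta:

    # Same bdevperf verify invocation, but with 64 KiB I/Os instead of 4 KiB.
    # Expect IOPS roughly an order of magnitude lower at similar aggregate bandwidth
    # (this run: ~1608 IOPS / 100.51 MiB/s vs ~22973 IOPS / 89.74 MiB/s at 4 KiB).
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3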
00:13:31.258 1207.00 IOPS, 75.44 MiB/s [2024-11-19T20:03:05.052Z] 2719.50 IOPS, 169.97 MiB/s 00:13:31.258 Latency(us) 00:13:31.258 [2024-11-19T20:03:05.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.258 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0xa000 00:13:31.258 nvme0n1 : 5.90 116.57 7.29 0.00 0.00 1046404.75 83079.48 1832588.21 00:13:31.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0xa000 length 0xa000 00:13:31.258 nvme0n1 : 5.88 100.70 6.29 0.00 0.00 1197320.92 157286.40 1832588.21 00:13:31.258 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0xbd0b 00:13:31.258 nvme1n1 : 5.90 151.75 9.48 0.00 0.00 772741.63 13913.80 961463.53 00:13:31.258 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:31.258 nvme1n1 : 5.99 128.28 8.02 0.00 0.00 925898.70 12905.55 1355082.83 00:13:31.258 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0x8000 00:13:31.258 nvme2n1 : 5.91 120.54 7.53 0.00 0.00 943189.92 83079.48 955010.76 00:13:31.258 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x8000 length 0x8000 00:13:31.258 nvme2n1 : 6.09 126.15 7.88 0.00 0.00 889958.93 148413.83 896935.78 00:13:31.258 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0x8000 00:13:31.258 nvme2n2 : 6.11 145.06 9.07 0.00 0.00 772710.29 120182.94 832408.02 00:13:31.258 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x8000 length 0x8000 00:13:31.258 nvme2n2 : 6.09 133.96 8.37 0.00 0.00 839261.13 176644.73 993727.41 00:13:31.258 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0x8000 00:13:31.258 nvme2n3 : 6.11 112.52 7.03 0.00 0.00 976782.28 23189.66 2413337.99 00:13:31.258 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x8000 length 0x8000 00:13:31.258 nvme2n3 : 6.10 133.70 8.36 0.00 0.00 814114.53 76626.71 2193943.63 00:13:31.258 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x0 length 0x2000 00:13:31.258 nvme3n1 : 6.12 141.25 8.83 0.00 0.00 748677.97 15022.87 2129415.88 00:13:31.258 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.258 Verification LBA range: start 0x2000 length 0x2000 00:13:31.258 nvme3n1 : 6.11 197.77 12.36 0.00 0.00 533180.51 3478.45 1051802.39 00:13:31.258 [2024-11-19T20:03:05.052Z] =================================================================================================================== 00:13:31.258 [2024-11-19T20:03:05.052Z] Total : 1608.24 100.51 0.00 0.00 843536.25 3478.45 2413337.99 00:13:32.647 00:13:32.647 real 0m7.939s 00:13:32.647 user 0m14.526s 00:13:32.647 sys 0m0.457s 00:13:32.647 ************************************ 00:13:32.647 END TEST bdev_verify_big_io 00:13:32.647 ************************************ 00:13:32.647 20:03:06 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.647 20:03:06 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.647 20:03:06 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.647 20:03:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:32.647 20:03:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.647 20:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.647 ************************************ 00:13:32.647 START TEST bdev_write_zeroes 00:13:32.647 ************************************ 00:13:32.647 20:03:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.647 [2024-11-19 20:03:06.176943] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:32.647 [2024-11-19 20:03:06.177086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70344 ] 00:13:32.647 [2024-11-19 20:03:06.342635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.908 [2024-11-19 20:03:06.463287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.169 Running I/O for 1 seconds... 00:13:34.393 81536.00 IOPS, 318.50 MiB/s 00:13:34.393 Latency(us) 00:13:34.393 [2024-11-19T20:03:08.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.393 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme0n1 : 1.03 13147.57 51.36 0.00 0.00 9725.59 5822.62 22181.42 00:13:34.393 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme1n1 : 1.03 14720.06 57.50 0.00 0.00 8677.50 4713.55 26617.70 00:13:34.393 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme2n1 : 1.03 13082.60 51.10 0.00 0.00 9757.26 5923.45 25306.98 00:13:34.393 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme2n2 : 1.03 13067.87 51.05 0.00 0.00 9712.64 5167.26 24097.08 00:13:34.393 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme2n3 : 1.03 13053.02 50.99 0.00 0.00 9717.62 5167.26 22887.19 00:13:34.393 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.393 nvme3n1 : 1.03 13038.26 50.93 0.00 0.00 9718.32 5091.64 21778.12 00:13:34.393 [2024-11-19T20:03:08.187Z] =================================================================================================================== 00:13:34.393 [2024-11-19T20:03:08.187Z] Total : 80109.37 312.93 0.00 0.00 9534.12 4713.55 26617.70 00:13:34.966 00:13:34.966 real 0m2.623s 00:13:34.966 user 0m1.948s 00:13:34.966 sys 0m0.471s 00:13:34.966 20:03:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.966 ************************************ 00:13:34.966 END TEST bdev_write_zeroes 00:13:34.966 ************************************ 00:13:34.966 20:03:08 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:35.228 20:03:08 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.228 20:03:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:35.228 20:03:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.228 20:03:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.228 ************************************ 00:13:35.228 START TEST bdev_json_nonenclosed 00:13:35.228 ************************************ 00:13:35.228 20:03:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.228 [2024-11-19 20:03:08.874777] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:35.228 [2024-11-19 20:03:08.874925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70387 ] 00:13:35.489 [2024-11-19 20:03:09.039456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.489 [2024-11-19 20:03:09.159528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.489 [2024-11-19 20:03:09.159634] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:35.489 [2024-11-19 20:03:09.159655] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:35.489 [2024-11-19 20:03:09.159665] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:35.750 00:13:35.750 real 0m0.553s 00:13:35.750 user 0m0.332s 00:13:35.750 sys 0m0.115s 00:13:35.750 20:03:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.750 ************************************ 00:13:35.750 END TEST bdev_json_nonenclosed 00:13:35.750 ************************************ 00:13:35.750 20:03:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:35.750 20:03:09 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.750 20:03:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:35.750 20:03:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.750 20:03:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.750 ************************************ 00:13:35.750 START TEST bdev_json_nonarray 00:13:35.750 ************************************ 00:13:35.750 20:03:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.750 [2024-11-19 20:03:09.499282] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
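The json_nonenclosed test above and the json_nonarray test below are negative tests: each feeds deliberately malformed JSON to bdevperf and expects a clean error exit ("Invalid JSON configuration: ...") rather than a crash. A hedged illustration of the first case; the actual nonenclosed.json in the repo may differ from this hypothetical reproduction:

    # A config whose top level is not wrapped in {} should make json_config fail
    # with 'not enclosed in {}' and bdevperf exit non-zero.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    ./build/examples/bdevperf --json /tmp/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 \
        && echo "unexpected success" || echo "rejected as expected"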
00:13:35.750 [2024-11-19 20:03:09.499428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:13:36.012 [2024-11-19 20:03:09.665879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.012 [2024-11-19 20:03:09.785897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.012 [2024-11-19 20:03:09.786012] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:36.012 [2024-11-19 20:03:09.786032] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.012 [2024-11-19 20:03:09.786043] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.274 00:13:36.274 real 0m0.550s 00:13:36.274 user 0m0.331s 00:13:36.274 sys 0m0.113s 00:13:36.274 20:03:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.274 20:03:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:36.274 ************************************ 00:13:36.274 END TEST bdev_json_nonarray 00:13:36.274 ************************************ 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:36.274 20:03:10 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:36.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:46.866 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.866 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.400 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.400 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.400 00:13:49.400 real 1m5.598s 00:13:49.400 user 1m24.484s 00:13:49.400 sys 0m52.010s 00:13:49.400 ************************************ 00:13:49.400 END TEST blockdev_xnvme 00:13:49.400 ************************************ 00:13:49.400 20:03:23 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.400 20:03:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.662 20:03:23 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:49.662 20:03:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.662 20:03:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.662 20:03:23 -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.662 ************************************ 00:13:49.662 START TEST ublk 00:13:49.662 ************************************ 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:49.662 * Looking for test storage... 00:13:49.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.662 20:03:23 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.662 20:03:23 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.662 20:03:23 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.662 20:03:23 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.662 20:03:23 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.662 20:03:23 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:49.662 20:03:23 ublk -- scripts/common.sh@345 -- # : 1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.662 20:03:23 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.662 20:03:23 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@353 -- # local d=1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.662 20:03:23 ublk -- scripts/common.sh@355 -- # echo 1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.662 20:03:23 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@353 -- # local d=2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.662 20:03:23 ublk -- scripts/common.sh@355 -- # echo 2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.662 20:03:23 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.662 20:03:23 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.662 20:03:23 ublk -- scripts/common.sh@368 -- # return 0 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.662 20:03:23 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.663 --rc genhtml_branch_coverage=1 00:13:49.663 --rc genhtml_function_coverage=1 00:13:49.663 --rc genhtml_legend=1 00:13:49.663 --rc geninfo_all_blocks=1 00:13:49.663 --rc geninfo_unexecuted_blocks=1 00:13:49.663 00:13:49.663 ' 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.663 --rc genhtml_branch_coverage=1 00:13:49.663 --rc genhtml_function_coverage=1 00:13:49.663 --rc genhtml_legend=1 00:13:49.663 --rc geninfo_all_blocks=1 00:13:49.663 --rc geninfo_unexecuted_blocks=1 00:13:49.663 00:13:49.663 ' 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.663 --rc genhtml_branch_coverage=1 00:13:49.663 --rc genhtml_function_coverage=1 00:13:49.663 --rc genhtml_legend=1 00:13:49.663 --rc geninfo_all_blocks=1 00:13:49.663 --rc geninfo_unexecuted_blocks=1 00:13:49.663 00:13:49.663 ' 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.663 --rc genhtml_branch_coverage=1 00:13:49.663 --rc genhtml_function_coverage=1 00:13:49.663 --rc genhtml_legend=1 00:13:49.663 --rc geninfo_all_blocks=1 00:13:49.663 --rc geninfo_unexecuted_blocks=1 00:13:49.663 00:13:49.663 ' 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:49.663 20:03:23 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:49.663 20:03:23 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:49.663 20:03:23 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:49.663 20:03:23 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:49.663 20:03:23 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:49.663 20:03:23 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:49.663 20:03:23 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:49.663 20:03:23 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:49.663 20:03:23 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:49.663 20:03:23 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.663 20:03:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.663 ************************************ 00:13:49.663 START TEST test_save_ublk_config 00:13:49.663 ************************************ 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70715 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70715 00:13:49.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70715 ']' 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.663 20:03:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.925 [2024-11-19 20:03:23.475781] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
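test_save_ublk_config, which starts here, round-trips the runtime configuration: bring up a ublk disk, dump the config with save_config, then restart the target from that dump and check /dev/ublkb0 reappears. A hedged sketch of the same flow using rpc.py (option spellings assumed from SPDK's scripts/; sizes match the malloc0 bdev in this log, 8192 blocks of 4096 bytes = 32 MiB):

    # Start the target with ublk debug tracing; readiness wait omitted for brevity.
    ./build/bin/spdk_tgt -L ublk & tgt=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    # Capture the whole runtime config as JSON.
    ./scripts/rpc.py save_config > /tmp/ublk_config.json
    # Kill the target and restart it from the saved config.
    kill "$tgt"; wait "$tgt"
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json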
00:13:49.925 [2024-11-19 20:03:23.476144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70715 ] 00:13:49.925 [2024-11-19 20:03:23.639873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.186 [2024-11-19 20:03:23.767057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.758 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.758 [2024-11-19 20:03:24.486247] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:50.758 [2024-11-19 20:03:24.487154] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.019 malloc0 00:13:51.019 [2024-11-19 20:03:24.558396] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:51.019 [2024-11-19 20:03:24.558490] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:51.019 [2024-11-19 20:03:24.558501] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:51.019 [2024-11-19 20:03:24.558508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:51.019 [2024-11-19 20:03:24.567343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:51.019 [2024-11-19 20:03:24.567373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:51.019 [2024-11-19 20:03:24.574261] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:51.019 [2024-11-19 20:03:24.574389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:51.019 [2024-11-19 20:03:24.591252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:51.019 0 00:13:51.019 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.019 20:03:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:51.019 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.019 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.281 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.281 20:03:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:51.281 "subsystems": [ 00:13:51.281 { 00:13:51.281 "subsystem": "fsdev", 00:13:51.281 "config": [ 00:13:51.281 { 00:13:51.281 "method": "fsdev_set_opts", 00:13:51.281 "params": { 00:13:51.281 "fsdev_io_pool_size": 65535, 00:13:51.281 "fsdev_io_cache_size": 256 00:13:51.281 } 00:13:51.281 } 00:13:51.281 ] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "keyring", 00:13:51.281 "config": [] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "iobuf", 00:13:51.281 "config": [ 00:13:51.281 { 
00:13:51.281 "method": "iobuf_set_options", 00:13:51.281 "params": { 00:13:51.281 "small_pool_count": 8192, 00:13:51.281 "large_pool_count": 1024, 00:13:51.281 "small_bufsize": 8192, 00:13:51.281 "large_bufsize": 135168, 00:13:51.281 "enable_numa": false 00:13:51.281 } 00:13:51.281 } 00:13:51.281 ] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "sock", 00:13:51.281 "config": [ 00:13:51.281 { 00:13:51.281 "method": "sock_set_default_impl", 00:13:51.281 "params": { 00:13:51.281 "impl_name": "posix" 00:13:51.281 } 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "method": "sock_impl_set_options", 00:13:51.281 "params": { 00:13:51.281 "impl_name": "ssl", 00:13:51.281 "recv_buf_size": 4096, 00:13:51.281 "send_buf_size": 4096, 00:13:51.281 "enable_recv_pipe": true, 00:13:51.281 "enable_quickack": false, 00:13:51.281 "enable_placement_id": 0, 00:13:51.281 "enable_zerocopy_send_server": true, 00:13:51.281 "enable_zerocopy_send_client": false, 00:13:51.281 "zerocopy_threshold": 0, 00:13:51.281 "tls_version": 0, 00:13:51.281 "enable_ktls": false 00:13:51.281 } 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "method": "sock_impl_set_options", 00:13:51.281 "params": { 00:13:51.281 "impl_name": "posix", 00:13:51.281 "recv_buf_size": 2097152, 00:13:51.281 "send_buf_size": 2097152, 00:13:51.281 "enable_recv_pipe": true, 00:13:51.281 "enable_quickack": false, 00:13:51.281 "enable_placement_id": 0, 00:13:51.281 "enable_zerocopy_send_server": true, 00:13:51.281 "enable_zerocopy_send_client": false, 00:13:51.281 "zerocopy_threshold": 0, 00:13:51.281 "tls_version": 0, 00:13:51.281 "enable_ktls": false 00:13:51.281 } 00:13:51.281 } 00:13:51.281 ] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "vmd", 00:13:51.281 "config": [] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "accel", 00:13:51.281 "config": [ 00:13:51.281 { 00:13:51.281 "method": "accel_set_options", 00:13:51.281 "params": { 00:13:51.281 "small_cache_size": 128, 00:13:51.281 "large_cache_size": 16, 00:13:51.281 "task_count": 2048, 00:13:51.281 "sequence_count": 2048, 00:13:51.281 "buf_count": 2048 00:13:51.281 } 00:13:51.281 } 00:13:51.281 ] 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "subsystem": "bdev", 00:13:51.281 "config": [ 00:13:51.281 { 00:13:51.281 "method": "bdev_set_options", 00:13:51.281 "params": { 00:13:51.281 "bdev_io_pool_size": 65535, 00:13:51.281 "bdev_io_cache_size": 256, 00:13:51.281 "bdev_auto_examine": true, 00:13:51.281 "iobuf_small_cache_size": 128, 00:13:51.281 "iobuf_large_cache_size": 16 00:13:51.281 } 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "method": "bdev_raid_set_options", 00:13:51.281 "params": { 00:13:51.281 "process_window_size_kb": 1024, 00:13:51.281 "process_max_bandwidth_mb_sec": 0 00:13:51.281 } 00:13:51.281 }, 00:13:51.281 { 00:13:51.281 "method": "bdev_iscsi_set_options", 00:13:51.281 "params": { 00:13:51.281 "timeout_sec": 30 00:13:51.281 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "bdev_nvme_set_options", 00:13:51.282 "params": { 00:13:51.282 "action_on_timeout": "none", 00:13:51.282 "timeout_us": 0, 00:13:51.282 "timeout_admin_us": 0, 00:13:51.282 "keep_alive_timeout_ms": 10000, 00:13:51.282 "arbitration_burst": 0, 00:13:51.282 "low_priority_weight": 0, 00:13:51.282 "medium_priority_weight": 0, 00:13:51.282 "high_priority_weight": 0, 00:13:51.282 "nvme_adminq_poll_period_us": 10000, 00:13:51.282 "nvme_ioq_poll_period_us": 0, 00:13:51.282 "io_queue_requests": 0, 00:13:51.282 "delay_cmd_submit": true, 00:13:51.282 "transport_retry_count": 4, 00:13:51.282 
"bdev_retry_count": 3, 00:13:51.282 "transport_ack_timeout": 0, 00:13:51.282 "ctrlr_loss_timeout_sec": 0, 00:13:51.282 "reconnect_delay_sec": 0, 00:13:51.282 "fast_io_fail_timeout_sec": 0, 00:13:51.282 "disable_auto_failback": false, 00:13:51.282 "generate_uuids": false, 00:13:51.282 "transport_tos": 0, 00:13:51.282 "nvme_error_stat": false, 00:13:51.282 "rdma_srq_size": 0, 00:13:51.282 "io_path_stat": false, 00:13:51.282 "allow_accel_sequence": false, 00:13:51.282 "rdma_max_cq_size": 0, 00:13:51.282 "rdma_cm_event_timeout_ms": 0, 00:13:51.282 "dhchap_digests": [ 00:13:51.282 "sha256", 00:13:51.282 "sha384", 00:13:51.282 "sha512" 00:13:51.282 ], 00:13:51.282 "dhchap_dhgroups": [ 00:13:51.282 "null", 00:13:51.282 "ffdhe2048", 00:13:51.282 "ffdhe3072", 00:13:51.282 "ffdhe4096", 00:13:51.282 "ffdhe6144", 00:13:51.282 "ffdhe8192" 00:13:51.282 ] 00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "bdev_nvme_set_hotplug", 00:13:51.282 "params": { 00:13:51.282 "period_us": 100000, 00:13:51.282 "enable": false 00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "bdev_malloc_create", 00:13:51.282 "params": { 00:13:51.282 "name": "malloc0", 00:13:51.282 "num_blocks": 8192, 00:13:51.282 "block_size": 4096, 00:13:51.282 "physical_block_size": 4096, 00:13:51.282 "uuid": "8a6212e1-46de-4ab3-996f-2ea130414214", 00:13:51.282 "optimal_io_boundary": 0, 00:13:51.282 "md_size": 0, 00:13:51.282 "dif_type": 0, 00:13:51.282 "dif_is_head_of_md": false, 00:13:51.282 "dif_pi_format": 0 00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "bdev_wait_for_examine" 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "scsi", 00:13:51.282 "config": null 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "scheduler", 00:13:51.282 "config": [ 00:13:51.282 { 00:13:51.282 "method": "framework_set_scheduler", 00:13:51.282 "params": { 00:13:51.282 "name": "static" 00:13:51.282 } 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "vhost_scsi", 00:13:51.282 "config": [] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "vhost_blk", 00:13:51.282 "config": [] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "ublk", 00:13:51.282 "config": [ 00:13:51.282 { 00:13:51.282 "method": "ublk_create_target", 00:13:51.282 "params": { 00:13:51.282 "cpumask": "1" 00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "ublk_start_disk", 00:13:51.282 "params": { 00:13:51.282 "bdev_name": "malloc0", 00:13:51.282 "ublk_id": 0, 00:13:51.282 "num_queues": 1, 00:13:51.282 "queue_depth": 128 00:13:51.282 } 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "nbd", 00:13:51.282 "config": [] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "nvmf", 00:13:51.282 "config": [ 00:13:51.282 { 00:13:51.282 "method": "nvmf_set_config", 00:13:51.282 "params": { 00:13:51.282 "discovery_filter": "match_any", 00:13:51.282 "admin_cmd_passthru": { 00:13:51.282 "identify_ctrlr": false 00:13:51.282 }, 00:13:51.282 "dhchap_digests": [ 00:13:51.282 "sha256", 00:13:51.282 "sha384", 00:13:51.282 "sha512" 00:13:51.282 ], 00:13:51.282 "dhchap_dhgroups": [ 00:13:51.282 "null", 00:13:51.282 "ffdhe2048", 00:13:51.282 "ffdhe3072", 00:13:51.282 "ffdhe4096", 00:13:51.282 "ffdhe6144", 00:13:51.282 "ffdhe8192" 00:13:51.282 ] 00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "nvmf_set_max_subsystems", 00:13:51.282 "params": { 00:13:51.282 "max_subsystems": 1024 
00:13:51.282 } 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "method": "nvmf_set_crdt", 00:13:51.282 "params": { 00:13:51.282 "crdt1": 0, 00:13:51.282 "crdt2": 0, 00:13:51.282 "crdt3": 0 00:13:51.282 } 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "subsystem": "iscsi", 00:13:51.282 "config": [ 00:13:51.282 { 00:13:51.282 "method": "iscsi_set_options", 00:13:51.282 "params": { 00:13:51.282 "node_base": "iqn.2016-06.io.spdk", 00:13:51.282 "max_sessions": 128, 00:13:51.282 "max_connections_per_session": 2, 00:13:51.282 "max_queue_depth": 64, 00:13:51.282 "default_time2wait": 2, 00:13:51.282 "default_time2retain": 20, 00:13:51.282 "first_burst_length": 8192, 00:13:51.282 "immediate_data": true, 00:13:51.282 "allow_duplicated_isid": false, 00:13:51.282 "error_recovery_level": 0, 00:13:51.282 "nop_timeout": 60, 00:13:51.282 "nop_in_interval": 30, 00:13:51.282 "disable_chap": false, 00:13:51.282 "require_chap": false, 00:13:51.282 "mutual_chap": false, 00:13:51.282 "chap_group": 0, 00:13:51.282 "max_large_datain_per_connection": 64, 00:13:51.282 "max_r2t_per_connection": 4, 00:13:51.282 "pdu_pool_size": 36864, 00:13:51.282 "immediate_data_pool_size": 16384, 00:13:51.282 "data_out_pool_size": 2048 00:13:51.282 } 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 }' 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70715 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70715 ']' 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70715 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70715 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.282 killing process with pid 70715 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70715' 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70715 00:13:51.282 20:03:24 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70715 00:13:52.227 [2024-11-19 20:03:26.014835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.485 [2024-11-19 20:03:26.048291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.485 [2024-11-19 20:03:26.048435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.485 [2024-11-19 20:03:26.056265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.485 [2024-11-19 20:03:26.056328] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.485 [2024-11-19 20:03:26.056343] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.485 [2024-11-19 20:03:26.056376] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.485 [2024-11-19 20:03:26.056536] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70771 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- 
ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70771 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70771 ']' 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.860 20:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:53.860 "subsystems": [ 00:13:53.860 { 00:13:53.860 "subsystem": "fsdev", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "fsdev_set_opts", 00:13:53.860 "params": { 00:13:53.860 "fsdev_io_pool_size": 65535, 00:13:53.860 "fsdev_io_cache_size": 256 00:13:53.860 } 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "keyring", 00:13:53.860 "config": [] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "iobuf", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "iobuf_set_options", 00:13:53.860 "params": { 00:13:53.860 "small_pool_count": 8192, 00:13:53.860 "large_pool_count": 1024, 00:13:53.860 "small_bufsize": 8192, 00:13:53.860 "large_bufsize": 135168, 00:13:53.860 "enable_numa": false 00:13:53.860 } 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "sock", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "sock_set_default_impl", 00:13:53.860 "params": { 00:13:53.860 "impl_name": "posix" 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "sock_impl_set_options", 00:13:53.860 "params": { 00:13:53.860 "impl_name": "ssl", 00:13:53.860 "recv_buf_size": 4096, 00:13:53.860 "send_buf_size": 4096, 00:13:53.860 "enable_recv_pipe": true, 00:13:53.860 "enable_quickack": false, 00:13:53.860 "enable_placement_id": 0, 00:13:53.860 "enable_zerocopy_send_server": true, 00:13:53.860 "enable_zerocopy_send_client": false, 00:13:53.860 "zerocopy_threshold": 0, 00:13:53.860 "tls_version": 0, 00:13:53.860 "enable_ktls": false 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "sock_impl_set_options", 00:13:53.860 "params": { 00:13:53.860 "impl_name": "posix", 00:13:53.860 "recv_buf_size": 2097152, 00:13:53.860 "send_buf_size": 2097152, 00:13:53.860 "enable_recv_pipe": true, 00:13:53.860 "enable_quickack": false, 00:13:53.860 "enable_placement_id": 0, 00:13:53.860 "enable_zerocopy_send_server": true, 00:13:53.860 "enable_zerocopy_send_client": false, 00:13:53.860 "zerocopy_threshold": 0, 00:13:53.860 "tls_version": 0, 00:13:53.860 "enable_ktls": false 00:13:53.860 } 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "vmd", 00:13:53.860 "config": [] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "accel", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "accel_set_options", 00:13:53.860 "params": { 00:13:53.860 "small_cache_size": 128, 00:13:53.860 "large_cache_size": 16, 00:13:53.860 "task_count": 2048, 00:13:53.860 
"sequence_count": 2048, 00:13:53.860 "buf_count": 2048 00:13:53.860 } 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "bdev", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "bdev_set_options", 00:13:53.860 "params": { 00:13:53.860 "bdev_io_pool_size": 65535, 00:13:53.860 "bdev_io_cache_size": 256, 00:13:53.860 "bdev_auto_examine": true, 00:13:53.860 "iobuf_small_cache_size": 128, 00:13:53.860 "iobuf_large_cache_size": 16 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_raid_set_options", 00:13:53.860 "params": { 00:13:53.860 "process_window_size_kb": 1024, 00:13:53.860 "process_max_bandwidth_mb_sec": 0 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_iscsi_set_options", 00:13:53.860 "params": { 00:13:53.860 "timeout_sec": 30 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_nvme_set_options", 00:13:53.860 "params": { 00:13:53.860 "action_on_timeout": "none", 00:13:53.860 "timeout_us": 0, 00:13:53.860 "timeout_admin_us": 0, 00:13:53.860 "keep_alive_timeout_ms": 10000, 00:13:53.860 "arbitration_burst": 0, 00:13:53.860 "low_priority_weight": 0, 00:13:53.860 "medium_priority_weight": 0, 00:13:53.860 "high_priority_weight": 0, 00:13:53.860 "nvme_adminq_poll_period_us": 10000, 00:13:53.860 "nvme_ioq_poll_period_us": 0, 00:13:53.860 "io_queue_requests": 0, 00:13:53.860 "delay_cmd_submit": true, 00:13:53.860 "transport_retry_count": 4, 00:13:53.860 "bdev_retry_count": 3, 00:13:53.860 "transport_ack_timeout": 0, 00:13:53.860 "ctrlr_loss_timeout_sec": 0, 00:13:53.860 "reconnect_delay_sec": 0, 00:13:53.860 "fast_io_fail_timeout_sec": 0, 00:13:53.860 "disable_auto_failback": false, 00:13:53.860 "generate_uuids": false, 00:13:53.860 "transport_tos": 0, 00:13:53.860 "nvme_error_stat": false, 00:13:53.860 "rdma_srq_size": 0, 00:13:53.860 "io_path_stat": false, 00:13:53.860 "allow_accel_sequence": false, 00:13:53.860 "rdma_max_cq_size": 0, 00:13:53.860 "rdma_cm_event_timeout_ms": 0, 00:13:53.860 "dhchap_digests": [ 00:13:53.860 "sha256", 00:13:53.860 "sha384", 00:13:53.860 "sha512" 00:13:53.860 ], 00:13:53.860 "dhchap_dhgroups": [ 00:13:53.860 "null", 00:13:53.860 "ffdhe2048", 00:13:53.860 "ffdhe3072", 00:13:53.860 "ffdhe4096", 00:13:53.860 "ffdhe6144", 00:13:53.860 "ffdhe8192" 00:13:53.860 ] 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_nvme_set_hotplug", 00:13:53.860 "params": { 00:13:53.860 "period_us": 100000, 00:13:53.860 "enable": false 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_malloc_create", 00:13:53.860 "params": { 00:13:53.860 "name": "malloc0", 00:13:53.860 "num_blocks": 8192, 00:13:53.860 "block_size": 4096, 00:13:53.860 "physical_block_size": 4096, 00:13:53.860 "uuid": "8a6212e1-46de-4ab3-996f-2ea130414214", 00:13:53.860 "optimal_io_boundary": 0, 00:13:53.860 "md_size": 0, 00:13:53.860 "dif_type": 0, 00:13:53.860 "dif_is_head_of_md": false, 00:13:53.860 "dif_pi_format": 0 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "bdev_wait_for_examine" 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "scsi", 00:13:53.860 "config": null 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "scheduler", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "framework_set_scheduler", 00:13:53.860 "params": { 00:13:53.860 "name": "static" 00:13:53.860 } 00:13:53.860 } 00:13:53.860 ] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": 
"vhost_scsi", 00:13:53.860 "config": [] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "vhost_blk", 00:13:53.860 "config": [] 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "subsystem": "ublk", 00:13:53.860 "config": [ 00:13:53.860 { 00:13:53.860 "method": "ublk_create_target", 00:13:53.860 "params": { 00:13:53.860 "cpumask": "1" 00:13:53.860 } 00:13:53.860 }, 00:13:53.860 { 00:13:53.860 "method": "ublk_start_disk", 00:13:53.860 "params": { 00:13:53.860 "bdev_name": "malloc0", 00:13:53.860 "ublk_id": 0, 00:13:53.860 "num_queues": 1, 00:13:53.860 "queue_depth": 128 00:13:53.860 } 00:13:53.861 } 00:13:53.861 ] 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "subsystem": "nbd", 00:13:53.861 "config": [] 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "subsystem": "nvmf", 00:13:53.861 "config": [ 00:13:53.861 { 00:13:53.861 "method": "nvmf_set_config", 00:13:53.861 "params": { 00:13:53.861 "discovery_filter": "match_any", 00:13:53.861 "admin_cmd_passthru": { 00:13:53.861 "identify_ctrlr": false 00:13:53.861 }, 00:13:53.861 "dhchap_digests": [ 00:13:53.861 "sha256", 00:13:53.861 "sha384", 00:13:53.861 "sha512" 00:13:53.861 ], 00:13:53.861 "dhchap_dhgroups": [ 00:13:53.861 "null", 00:13:53.861 "ffdhe2048", 00:13:53.861 "ffdhe3072", 00:13:53.861 "ffdhe4096", 00:13:53.861 "ffdhe6144", 00:13:53.861 "ffdhe8192" 00:13:53.861 ] 00:13:53.861 } 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "method": "nvmf_set_max_subsystems", 00:13:53.861 "params": { 00:13:53.861 "max_subsystems": 1024 00:13:53.861 } 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "method": "nvmf_set_crdt", 00:13:53.861 "params": { 00:13:53.861 "crdt1": 0, 00:13:53.861 "crdt2": 0, 00:13:53.861 "crdt3": 0 00:13:53.861 } 00:13:53.861 } 00:13:53.861 ] 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "subsystem": "iscsi", 00:13:53.861 "config": [ 00:13:53.861 { 00:13:53.861 "method": "iscsi_set_options", 00:13:53.861 "params": { 00:13:53.861 "node_base": "iqn.2016-06.io.spdk", 00:13:53.861 "max_sessions": 128, 00:13:53.861 "max_connections_per_session": 2, 00:13:53.861 "max_queue_depth": 64, 00:13:53.861 "default_time2wait": 2, 00:13:53.861 "default_time2retain": 20, 00:13:53.861 "first_burst_length": 8192, 00:13:53.861 "immediate_data": true, 00:13:53.861 "allow_duplicated_isid": false, 00:13:53.861 "error_recovery_level": 0, 00:13:53.861 "nop_timeout": 60, 00:13:53.861 "nop_in_interval": 30, 00:13:53.861 "disable_chap": false, 00:13:53.861 "require_chap": false, 00:13:53.861 "mutual_chap": false, 00:13:53.861 "chap_group": 0, 00:13:53.861 "max_large_datain_per_connection": 64, 00:13:53.861 "max_r2t_per_connection": 4, 00:13:53.861 "pdu_pool_size": 36864, 00:13:53.861 "immediate_data_pool_size": 16384, 00:13:53.861 "data_out_pool_size": 2048 00:13:53.861 } 00:13:53.861 } 00:13:53.861 ] 00:13:53.861 } 00:13:53.861 ] 00:13:53.861 }' 00:13:53.861 20:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.861 [2024-11-19 20:03:27.491732] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:13:53.861 [2024-11-19 20:03:27.493345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:13:54.119 [2024-11-19 20:03:27.651417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.119 [2024-11-19 20:03:27.731207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.694 [2024-11-19 20:03:28.359237] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:54.694 [2024-11-19 20:03:28.359868] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:54.694 [2024-11-19 20:03:28.367323] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:54.694 [2024-11-19 20:03:28.367382] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:54.694 [2024-11-19 20:03:28.367390] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:54.694 [2024-11-19 20:03:28.367395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.694 [2024-11-19 20:03:28.376287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.694 [2024-11-19 20:03:28.376303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.694 [2024-11-19 20:03:28.383243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.694 [2024-11-19 20:03:28.383317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:54.694 [2024-11-19 20:03:28.400240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70771 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70771 ']' 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70771 00:13:54.694 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70771 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.961 killing process with pid 70771 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70771' 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70771 00:13:54.961 20:03:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70771 00:13:55.899 [2024-11-19 20:03:29.564012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.899 [2024-11-19 20:03:29.613308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.899 [2024-11-19 20:03:29.613401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.899 [2024-11-19 20:03:29.621247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.899 [2024-11-19 20:03:29.621287] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.899 [2024-11-19 20:03:29.621293] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.899 [2024-11-19 20:03:29.621313] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.899 [2024-11-19 20:03:29.621419] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:57.279 20:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:57.279 00:13:57.279 real 0m7.526s 00:13:57.279 user 0m4.955s 00:13:57.279 sys 0m3.208s 00:13:57.279 20:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.279 20:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.279 ************************************ 00:13:57.279 END TEST test_save_ublk_config 00:13:57.279 ************************************ 00:13:57.279 20:03:30 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70843 00:13:57.279 20:03:30 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.279 20:03:30 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70843 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@835 -- # '[' -z 70843 ']' 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.279 20:03:30 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.279 20:03:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.279 [2024-11-19 20:03:31.040019] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
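The JSON dump above is the configuration that test_save_ublk_config captured with the save_config RPC; the assertions on /dev/ublkb0 and the killprocess of pid 70771 verify that a target booted from that JSON comes back with the same ublk target and disk. A minimal sketch of that round trip, assuming a repo checkout with the stock scripts/rpc.py client and a built spdk_tgt; the /tmp path is illustrative and not taken from the log:

./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks of 4096 B, as in the config above
./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes the bdev as /dev/ublkb0
./scripts/rpc.py save_config > /tmp/ublk_config.json     # emits JSON like the block above
# kill the target, then boot a fresh one straight from the saved state:
./build/bin/spdk_tgt --json /tmp/ublk_config.json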
00:13:57.279 [2024-11-19 20:03:31.040147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70843 ] 00:13:57.539 [2024-11-19 20:03:31.202657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:57.799 [2024-11-19 20:03:31.332541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.799 [2024-11-19 20:03:31.332657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.370 20:03:32 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.370 20:03:32 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:58.370 20:03:32 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:58.370 20:03:32 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.370 20:03:32 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.370 20:03:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.370 ************************************ 00:13:58.370 START TEST test_create_ublk 00:13:58.370 ************************************ 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:58.370 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.370 [2024-11-19 20:03:32.047254] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:58.370 [2024-11-19 20:03:32.049564] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.370 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:58.370 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.370 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.630 [2024-11-19 20:03:32.271419] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:58.630 [2024-11-19 20:03:32.271857] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:58.630 [2024-11-19 20:03:32.271885] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:58.630 [2024-11-19 20:03:32.271893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:58.630 [2024-11-19 20:03:32.280568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:58.630 [2024-11-19 20:03:32.280599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:58.630 
[2024-11-19 20:03:32.287264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:58.630 [2024-11-19 20:03:32.299311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:58.630 [2024-11-19 20:03:32.313376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.630 20:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:58.630 { 00:13:58.630 "ublk_device": "/dev/ublkb0", 00:13:58.630 "id": 0, 00:13:58.630 "queue_depth": 512, 00:13:58.630 "num_queues": 4, 00:13:58.630 "bdev_name": "Malloc0" 00:13:58.630 } 00:13:58.630 ]' 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:58.630 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:58.889 20:03:32 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
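run_fio_test (sourced from test/lvol/common.sh) expands to the fio command that starts just below: a 10-second time-based write of pattern 0xcc across the first 128 MiB of /dev/ublkb0. Because --time_based spends the whole runtime writing, fio warns below that the verify phase never starts; a separate read pass can check the written pattern afterwards. A sketch of such a read-back, assuming the device is still up (the job name is illustrative; this step is not part of the test):

fio --name=readback --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0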
00:13:58.889 20:03:32 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:58.890 fio: verification read phase will never start because write phase uses all of runtime 00:13:58.890 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:58.890 fio-3.35 00:13:58.890 Starting 1 process 00:14:11.081 00:14:11.081 fio_test: (groupid=0, jobs=1): err= 0: pid=70889: Tue Nov 19 20:03:42 2024 00:14:11.081 write: IOPS=19.3k, BW=75.5MiB/s (79.2MB/s)(755MiB/10001msec); 0 zone resets 00:14:11.081 clat (usec): min=34, max=4075, avg=50.90, stdev=86.21 00:14:11.081 lat (usec): min=35, max=4076, avg=51.37, stdev=86.23 00:14:11.081 clat percentiles (usec): 00:14:11.081 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43], 00:14:11.081 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:14:11.081 | 70.00th=[ 49], 80.00th=[ 51], 90.00th=[ 57], 95.00th=[ 64], 00:14:11.081 | 99.00th=[ 79], 99.50th=[ 89], 99.90th=[ 1532], 99.95th=[ 2507], 00:14:11.081 | 99.99th=[ 3458] 00:14:11.081 bw ( KiB/s): min=55392, max=82528, per=99.97%, avg=77312.84, stdev=6393.10, samples=19 00:14:11.082 iops : min=13848, max=20632, avg=19328.21, stdev=1598.28, samples=19 00:14:11.082 lat (usec) : 50=77.64%, 100=21.98%, 250=0.19%, 500=0.03%, 750=0.01% 00:14:11.082 lat (usec) : 1000=0.01% 00:14:11.082 lat (msec) : 2=0.05%, 4=0.08%, 10=0.01% 00:14:11.082 cpu : usr=3.36%, sys=15.56%, ctx=193352, majf=0, minf=795 00:14:11.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.082 issued rwts: total=0,193367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.082 00:14:11.082 Run status group 0 (all jobs): 00:14:11.082 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=755MiB (792MB), run=10001-10001msec 00:14:11.082 00:14:11.082 Disk stats (read/write): 00:14:11.082 ublkb0: ios=0/191397, merge=0/0, ticks=0/8052, in_queue=8052, util=98.85% 00:14:11.082 20:03:42 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:42.734060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.082 [2024-11-19 20:03:42.769583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.082 [2024-11-19 20:03:42.770586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.082 [2024-11-19 20:03:42.777250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.082 [2024-11-19 20:03:42.777471] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:11.082 [2024-11-19 20:03:42.777485] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:42 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:42.793296] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:11.082 request: 00:14:11.082 { 00:14:11.082 "ublk_id": 0, 00:14:11.082 "method": "ublk_stop_disk", 00:14:11.082 "req_id": 1 00:14:11.082 } 00:14:11.082 Got JSON-RPC error response 00:14:11.082 response: 00:14:11.082 { 00:14:11.082 "code": -19, 00:14:11.082 "message": "No such device" 00:14:11.082 } 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:11.082 20:03:42 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:42.809303] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:11.082 [2024-11-19 20:03:42.812847] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:11.082 [2024-11-19 20:03:42.812879] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:42 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:11.082 20:03:43 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:11.082 20:03:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:11.082 00:14:11.082 real 0m11.221s 00:14:11.082 user 0m0.640s 00:14:11.082 sys 0m1.639s 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 ************************************ 00:14:11.082 END TEST test_create_ublk 00:14:11.082 ************************************ 00:14:11.082 20:03:43 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:11.082 20:03:43 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.082 20:03:43 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.082 20:03:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 ************************************ 00:14:11.082 START TEST test_create_multi_ublk 00:14:11.082 ************************************ 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:43.312229] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:11.082 [2024-11-19 20:03:43.313775] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:43.540343] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:11.082 [2024-11-19 20:03:43.540636] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:11.082 [2024-11-19 20:03:43.540648] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.082 [2024-11-19 20:03:43.540656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.082 [2024-11-19 20:03:43.552269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.082 [2024-11-19 20:03:43.552290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.082 [2024-11-19 20:03:43.564250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.082 [2024-11-19 20:03:43.564737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:11.082 [2024-11-19 20:03:43.606242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.082 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.082 [2024-11-19 20:03:43.822338] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:11.082 [2024-11-19 20:03:43.822627] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:11.082 [2024-11-19 20:03:43.822640] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:11.083 [2024-11-19 20:03:43.822646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.083 [2024-11-19 20:03:43.830256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.083 [2024-11-19 20:03:43.830274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.083 [2024-11-19 20:03:43.838245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.083 [2024-11-19 20:03:43.838723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:11.083 [2024-11-19 20:03:43.855241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.083 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:11.083 20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 
20:03:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:11.083 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.083 20:03:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.083 [2024-11-19 20:03:44.014323] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:11.083 [2024-11-19 20:03:44.014618] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:11.083 [2024-11-19 20:03:44.014630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:11.083 [2024-11-19 20:03:44.014636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.083 [2024-11-19 20:03:44.022249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.083 [2024-11-19 20:03:44.022268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.083 [2024-11-19 20:03:44.030245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.083 [2024-11-19 20:03:44.030730] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:11.083 [2024-11-19 20:03:44.035031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.083 [2024-11-19 20:03:44.194343] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:11.083 [2024-11-19 20:03:44.194637] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:11.083 [2024-11-19 20:03:44.194650] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:11.083 [2024-11-19 20:03:44.194656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.083 
[2024-11-19 20:03:44.202255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.083 [2024-11-19 20:03:44.202273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.083 [2024-11-19 20:03:44.210244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.083 [2024-11-19 20:03:44.210729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:11.083 [2024-11-19 20:03:44.219249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:11.083 { 00:14:11.083 "ublk_device": "/dev/ublkb0", 00:14:11.083 "id": 0, 00:14:11.083 "queue_depth": 512, 00:14:11.083 "num_queues": 4, 00:14:11.083 "bdev_name": "Malloc0" 00:14:11.083 }, 00:14:11.083 { 00:14:11.083 "ublk_device": "/dev/ublkb1", 00:14:11.083 "id": 1, 00:14:11.083 "queue_depth": 512, 00:14:11.083 "num_queues": 4, 00:14:11.083 "bdev_name": "Malloc1" 00:14:11.083 }, 00:14:11.083 { 00:14:11.083 "ublk_device": "/dev/ublkb2", 00:14:11.083 "id": 2, 00:14:11.083 "queue_depth": 512, 00:14:11.083 "num_queues": 4, 00:14:11.083 "bdev_name": "Malloc2" 00:14:11.083 }, 00:14:11.083 { 00:14:11.083 "ublk_device": "/dev/ublkb3", 00:14:11.083 "id": 3, 00:14:11.083 "queue_depth": 512, 00:14:11.083 "num_queues": 4, 00:14:11.083 "bdev_name": "Malloc3" 00:14:11.083 } 00:14:11.083 ]' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:11.083 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.341 [2024-11-19 20:03:44.890319] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.341 [2024-11-19 20:03:44.927588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.341 [2024-11-19 20:03:44.928679] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.341 [2024-11-19 20:03:44.937243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.341 [2024-11-19 20:03:44.937466] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:11.341 [2024-11-19 20:03:44.937480] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.341 20:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.341 [2024-11-19 20:03:44.953289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.341 [2024-11-19 20:03:44.986665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.341 [2024-11-19 20:03:44.987652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.341 [2024-11-19 20:03:44.993243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.341 [2024-11-19 20:03:44.993461] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:11.341 [2024-11-19 20:03:44.993473] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.341 [2024-11-19 20:03:45.008322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.341 [2024-11-19 20:03:45.047666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.341 [2024-11-19 20:03:45.048666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.341 [2024-11-19 20:03:45.056259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.341 [2024-11-19 20:03:45.056470] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:11.341 [2024-11-19 20:03:45.056481] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.341 [2024-11-19 20:03:45.073299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.341 [2024-11-19 20:03:45.107267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.341 [2024-11-19 20:03:45.107840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.341 [2024-11-19 20:03:45.117280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.341 [2024-11-19 20:03:45.117493] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:11.341 [2024-11-19 20:03:45.117504] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.341 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:11.599 [2024-11-19 20:03:45.308285] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:11.599 [2024-11-19 20:03:45.311868] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:11.599 [2024-11-19 20:03:45.311896] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:11.599 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:11.599 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.599 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:11.599 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.599 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.163 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.163 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.163 20:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:12.163 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.163 20:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.421 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.421 20:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.421 20:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:12.421 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.421 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:12.679 20:03:46 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:12.679 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:12.936 00:14:12.936 real 0m3.217s 00:14:12.936 user 0m0.814s 00:14:12.936 sys 0m0.139s 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.936 20:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.936 ************************************ 00:14:12.936 END TEST test_create_multi_ublk 00:14:12.936 ************************************ 00:14:12.936 20:03:46 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:12.936 20:03:46 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:12.937 20:03:46 ublk -- ublk/ublk.sh@130 -- # killprocess 70843 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@954 -- # '[' -z 70843 ']' 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@958 -- # kill -0 70843 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@959 -- # uname 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70843 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.937 killing process with pid 70843 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70843' 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@973 -- # kill 70843 00:14:12.937 20:03:46 ublk -- common/autotest_common.sh@978 -- # wait 70843 00:14:13.501 [2024-11-19 20:03:47.100025] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.501 [2024-11-19 20:03:47.100074] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.069 00:14:14.069 real 0m24.512s 00:14:14.069 user 0m34.761s 00:14:14.069 sys 0m10.045s 00:14:14.069 20:03:47 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.069 ************************************ 00:14:14.069 END TEST ublk 00:14:14.069 ************************************ 00:14:14.069 20:03:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.069 20:03:47 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:14.069 
20:03:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.069 20:03:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.069 20:03:47 -- common/autotest_common.sh@10 -- # set +x 00:14:14.069 ************************************ 00:14:14.069 START TEST ublk_recovery 00:14:14.069 ************************************ 00:14:14.069 20:03:47 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:14.069 * Looking for test storage... 00:14:14.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:14.330 20:03:47 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:14.330 20:03:47 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:14.330 20:03:47 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:14.330 20:03:47 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:14.330 20:03:47 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.330 20:03:47 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.331 20:03:47 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.331 --rc genhtml_branch_coverage=1 00:14:14.331 --rc genhtml_function_coverage=1 00:14:14.331 --rc genhtml_legend=1 00:14:14.331 --rc geninfo_all_blocks=1 00:14:14.331 --rc geninfo_unexecuted_blocks=1 00:14:14.331 00:14:14.331 ' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.331 --rc genhtml_branch_coverage=1 00:14:14.331 --rc genhtml_function_coverage=1 00:14:14.331 --rc genhtml_legend=1 00:14:14.331 --rc geninfo_all_blocks=1 00:14:14.331 --rc geninfo_unexecuted_blocks=1 00:14:14.331 00:14:14.331 ' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.331 --rc genhtml_branch_coverage=1 00:14:14.331 --rc genhtml_function_coverage=1 00:14:14.331 --rc genhtml_legend=1 00:14:14.331 --rc geninfo_all_blocks=1 00:14:14.331 --rc geninfo_unexecuted_blocks=1 00:14:14.331 00:14:14.331 ' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.331 --rc genhtml_branch_coverage=1 00:14:14.331 --rc genhtml_function_coverage=1 00:14:14.331 --rc genhtml_legend=1 00:14:14.331 --rc geninfo_all_blocks=1 00:14:14.331 --rc geninfo_unexecuted_blocks=1 00:14:14.331 00:14:14.331 ' 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:14.331 20:03:47 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71234 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71234 00:14:14.331 20:03:47 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71234 ']' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.331 20:03:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 [2024-11-19 20:03:48.042549] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:14.331 [2024-11-19 20:03:48.042696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71234 ] 00:14:14.590 [2024-11-19 20:03:48.203190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.590 [2024-11-19 20:03:48.291298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.590 [2024-11-19 20:03:48.291306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:15.155 20:03:48 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.155 [2024-11-19 20:03:48.870239] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.155 [2024-11-19 20:03:48.871752] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.155 20:03:48 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.155 20:03:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.155 malloc0 00:14:15.412 20:03:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.412 20:03:48 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:15.412 20:03:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.412 20:03:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.412 [2024-11-19 20:03:48.950565] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:15.412 [2024-11-19 20:03:48.950646] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:15.412 [2024-11-19 20:03:48.950655] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:15.412 [2024-11-19 20:03:48.950663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.412 [2024-11-19 20:03:48.959318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.412 [2024-11-19 20:03:48.959337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.412 [2024-11-19 20:03:48.966248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.412 [2024-11-19 20:03:48.966362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:15.412 [2024-11-19 20:03:48.987255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.412 1 00:14:15.412 20:03:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.412 20:03:48 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:16.342 20:03:49 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71269 00:14:16.342 20:03:49 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:16.342 20:03:49 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:16.342 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.342 fio-3.35 00:14:16.342 Starting 1 process 00:14:21.606 20:03:55 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71234 00:14:21.606 20:03:55 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:26.975 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71234 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:26.975 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71384 00:14:26.975 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.975 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71384 00:14:26.975 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71384 ']' 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.975 20:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 [2024-11-19 20:04:00.091490] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:14:26.975 [2024-11-19 20:04:00.091624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71384 ] 00:14:26.975 [2024-11-19 20:04:00.250901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.975 [2024-11-19 20:04:00.339489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.975 [2024-11-19 20:04:00.339695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:27.233 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.233 [2024-11-19 20:04:00.927240] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:27.233 [2024-11-19 20:04:00.928736] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.233 20:04:00 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.233 20:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.233 malloc0 00:14:27.233 20:04:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.233 20:04:01 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:27.233 20:04:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.233 20:04:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.233 [2024-11-19 20:04:01.007352] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:27.233 [2024-11-19 20:04:01.007385] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.233 [2024-11-19 20:04:01.007393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:27.233 [2024-11-19 20:04:01.015271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:27.233 [2024-11-19 20:04:01.015291] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:27.233 1 00:14:27.233 20:04:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.233 20:04:01 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71269 00:14:28.605 [2024-11-19 20:04:02.015318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:28.605 [2024-11-19 20:04:02.023245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:28.605 [2024-11-19 20:04:02.023263] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:29.539 [2024-11-19 20:04:03.023286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:29.539 [2024-11-19 20:04:03.027242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:29.539 [2024-11-19 20:04:03.027257] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:30.472 [2024-11-19 20:04:04.027274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:30.472 [2024-11-19 20:04:04.035239] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:30.472 [2024-11-19 20:04:04.035251] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:30.472 [2024-11-19 20:04:04.035259] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:30.472 [2024-11-19 20:04:04.035325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:52.393 [2024-11-19 20:04:25.090245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:52.393 [2024-11-19 20:04:25.096735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:52.393 [2024-11-19 20:04:25.104420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:52.393 [2024-11-19 20:04:25.104440] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:18.951 00:15:18.951 fio_test: (groupid=0, jobs=1): err= 0: pid=71272: Tue Nov 19 20:04:50 2024 00:15:18.951 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(3698MiB/60001msec) 00:15:18.951 slat (nsec): min=921, max=501821, avg=4824.88, stdev=1478.04 00:15:18.951 clat (usec): min=740, max=30112k, avg=4142.68, stdev=255165.80 00:15:18.951 lat (usec): min=746, max=30112k, avg=4147.51, stdev=255165.80 00:15:18.951 clat percentiles (usec): 00:15:18.951 | 1.00th=[ 1647], 5.00th=[ 1745], 10.00th=[ 1778], 20.00th=[ 1795], 00:15:18.951 | 30.00th=[ 1811], 40.00th=[ 1827], 50.00th=[ 1844], 60.00th=[ 1860], 00:15:18.951 | 70.00th=[ 1876], 80.00th=[ 1893], 90.00th=[ 2040], 95.00th=[ 2900], 00:15:18.951 | 99.00th=[ 4948], 99.50th=[ 5342], 99.90th=[ 6980], 99.95th=[ 8160], 00:15:18.951 | 99.99th=[12649] 00:15:18.951 bw ( KiB/s): min=13576, max=132736, per=100.00%, avg=124246.28, stdev=19439.35, samples=60 00:15:18.951 iops : min= 3394, max=33184, avg=31061.57, stdev=4859.84, samples=60 00:15:18.951 write: IOPS=15.8k, BW=61.6MiB/s (64.5MB/s)(3694MiB/60001msec); 0 zone resets 00:15:18.951 slat (nsec): min=1062, max=389809, avg=4848.35, stdev=1465.88 00:15:18.951 clat (usec): min=752, max=30112k, avg=3964.07, stdev=239833.81 00:15:18.951 lat (usec): min=757, max=30112k, avg=3968.92, stdev=239833.81 00:15:18.951 clat percentiles (usec): 00:15:18.951 | 1.00th=[ 1680], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1876], 00:15:18.951 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:15:18.951 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2073], 95.00th=[ 2835], 00:15:18.951 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[ 8225], 00:15:18.951 | 99.99th=[12780] 00:15:18.951 bw ( KiB/s): min=13616, max=131600, per=100.00%, avg=124043.27, stdev=19494.22, samples=60 00:15:18.951 iops : min= 3404, max=32900, avg=31010.80, stdev=4873.55, samples=60 00:15:18.951 lat (usec) : 750=0.01%, 1000=0.01% 00:15:18.951 lat (msec) : 2=87.19%, 4=10.24%, 10=2.54%, 20=0.03%, >=2000=0.01% 00:15:18.951 cpu : usr=3.52%, sys=15.69%, ctx=63838, majf=0, minf=13 00:15:18.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:18.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:18.951 issued rwts: total=946689,945537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.951 00:15:18.951 Run status group 0 (all jobs): 00:15:18.951 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=3698MiB (3878MB), run=60001-60001msec 00:15:18.951 WRITE: bw=61.6MiB/s (64.5MB/s), 61.6MiB/s-61.6MiB/s (64.5MB/s-64.5MB/s), io=3694MiB (3873MB), run=60001-60001msec 00:15:18.951 00:15:18.951 Disk stats (read/write): 00:15:18.951 ublkb1: ios=943238/941915, merge=0/0, ticks=3867361/3618340, in_queue=7485702, util=99.90% 00:15:18.951 20:04:50 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.951 [2024-11-19 20:04:50.253841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:18.951 [2024-11-19 20:04:50.282328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:18.951 [2024-11-19 20:04:50.282453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:18.951 [2024-11-19 20:04:50.289240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:18.951 [2024-11-19 20:04:50.289324] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:18.951 [2024-11-19 20:04:50.289330] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.951 20:04:50 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.951 [2024-11-19 20:04:50.305314] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.951 [2024-11-19 20:04:50.308896] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.951 [2024-11-19 20:04:50.308925] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.951 20:04:50 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:18.951 20:04:50 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:18.951 20:04:50 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71384 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71384 ']' 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71384 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71384 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.951 killing process with pid 71384 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71384' 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71384 00:15:18.951 20:04:50 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 71384 00:15:18.951 [2024-11-19 20:04:51.362982] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.951 [2024-11-19 20:04:51.363026] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.951 00:15:18.951 real 1m4.264s 00:15:18.951 user 1m46.427s 00:15:18.951 sys 0m22.991s 00:15:18.951 20:04:52 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.951 ************************************ 00:15:18.951 END TEST ublk_recovery 00:15:18.951 ************************************ 00:15:18.951 20:04:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.951 20:04:52 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:18.951 20:04:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:18.951 20:04:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.951 20:04:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.951 20:04:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:18.951 20:04:52 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.951 20:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.951 20:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.951 20:04:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.951 ************************************ 00:15:18.951 START TEST ftl 00:15:18.951 ************************************ 00:15:18.951 20:04:52 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.951 * Looking for test storage... 
00:15:18.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.951 20:04:52 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.951 20:04:52 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.951 20:04:52 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.951 20:04:52 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.951 20:04:52 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.951 20:04:52 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.951 20:04:52 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.951 20:04:52 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.951 20:04:52 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.951 20:04:52 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.951 20:04:52 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.951 20:04:52 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.951 20:04:52 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.951 20:04:52 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:18.951 20:04:52 ftl -- scripts/common.sh@345 -- # : 1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.951 20:04:52 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.951 20:04:52 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@353 -- # local d=1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.951 20:04:52 ftl -- scripts/common.sh@355 -- # echo 1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.951 20:04:52 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:18.952 20:04:52 ftl -- scripts/common.sh@353 -- # local d=2 00:15:18.952 20:04:52 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.952 20:04:52 ftl -- scripts/common.sh@355 -- # echo 2 00:15:18.952 20:04:52 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.952 20:04:52 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.952 20:04:52 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.952 20:04:52 ftl -- scripts/common.sh@368 -- # return 0 00:15:18.952 20:04:52 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.952 20:04:52 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 20:04:52 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 20:04:52 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc 
genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 20:04:52 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.952 20:04:52 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.952 20:04:52 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.952 20:04:52 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.952 20:04:52 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:18.952 20:04:52 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.952 20:04:52 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.952 20:04:52 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.952 20:04:52 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.952 20:04:52 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.952 20:04:52 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.952 20:04:52 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.952 20:04:52 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.952 20:04:52 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.952 20:04:52 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.952 20:04:52 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.952 20:04:52 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.952 20:04:52 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.952 20:04:52 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.952 20:04:52 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.952 20:04:52 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.952 20:04:52 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.952 20:04:52 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.952 20:04:52 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:18.952 20:04:52 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:19.213 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:19.213 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:19.213 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:19.213 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:19.213 20:04:52 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72187 00:15:19.213 20:04:52 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72187 00:15:19.213 20:04:52 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@835 -- # '[' -z 72187 ']' 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.213 20:04:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 [2024-11-19 20:04:52.877052] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:19.213 [2024-11-19 20:04:52.877204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72187 ] 00:15:19.474 [2024-11-19 20:04:53.041538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.474 [2024-11-19 20:04:53.163446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.041 20:04:53 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.041 20:04:53 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:20.041 20:04:53 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:20.041 20:04:53 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:20.983 20:04:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:20.983 20:04:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:21.550 20:04:55 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:21.550 20:04:55 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.550 20:04:55 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@50 -- # break 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:21.551 20:04:55 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.551 20:04:55 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.808 20:04:55 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:21.808 20:04:55 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:21.808 20:04:55 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:21.808 20:04:55 ftl -- ftl/ftl.sh@63 -- # break 00:15:21.808 20:04:55 ftl -- ftl/ftl.sh@66 -- # killprocess 72187 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@954 -- # '[' -z 72187 ']' 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@958 -- # kill -0 72187 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@959 -- # uname 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72187 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.808 killing process with pid 72187 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72187' 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@973 -- # kill 72187 00:15:21.808 20:04:55 ftl -- common/autotest_common.sh@978 -- # wait 72187 00:15:23.185 20:04:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:23.185 20:04:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:23.185 20:04:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.185 20:04:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.185 20:04:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:23.185 ************************************ 00:15:23.185 START TEST ftl_fio_basic 00:15:23.185 ************************************ 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:23.185 * Looking for test storage... 
00:15:23.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.185 --rc genhtml_branch_coverage=1 00:15:23.185 --rc genhtml_function_coverage=1 00:15:23.185 --rc genhtml_legend=1 00:15:23.185 --rc geninfo_all_blocks=1 00:15:23.185 --rc geninfo_unexecuted_blocks=1 00:15:23.185 00:15:23.185 ' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.185 --rc 
genhtml_branch_coverage=1 00:15:23.185 --rc genhtml_function_coverage=1 00:15:23.185 --rc genhtml_legend=1 00:15:23.185 --rc geninfo_all_blocks=1 00:15:23.185 --rc geninfo_unexecuted_blocks=1 00:15:23.185 00:15:23.185 ' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.185 --rc genhtml_branch_coverage=1 00:15:23.185 --rc genhtml_function_coverage=1 00:15:23.185 --rc genhtml_legend=1 00:15:23.185 --rc geninfo_all_blocks=1 00:15:23.185 --rc geninfo_unexecuted_blocks=1 00:15:23.185 00:15:23.185 ' 00:15:23.185 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.185 --rc genhtml_branch_coverage=1 00:15:23.185 --rc genhtml_function_coverage=1 00:15:23.185 --rc genhtml_legend=1 00:15:23.185 --rc geninfo_all_blocks=1 00:15:23.185 --rc geninfo_unexecuted_blocks=1 00:15:23.185 00:15:23.186 ' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:23.186 
20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72319 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72319 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72319 ']' 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:23.186 20:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:23.186 [2024-11-19 20:04:56.915988] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:23.186 [2024-11-19 20:04:56.916138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72319 ] 00:15:23.445 [2024-11-19 20:04:57.080924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.445 [2024-11-19 20:04:57.212289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.445 [2024-11-19 20:04:57.212473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.445 [2024-11-19 20:04:57.212540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:24.012 20:04:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:24.270 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:24.530 { 00:15:24.530 "name": "nvme0n1", 00:15:24.530 "aliases": [ 00:15:24.530 "4a47aa74-5f14-42df-9f19-5f8421bafbeb" 00:15:24.530 ], 00:15:24.530 "product_name": "NVMe disk", 00:15:24.530 "block_size": 4096, 00:15:24.530 "num_blocks": 1310720, 00:15:24.530 "uuid": "4a47aa74-5f14-42df-9f19-5f8421bafbeb", 00:15:24.530 "numa_id": -1, 00:15:24.530 "assigned_rate_limits": { 00:15:24.530 "rw_ios_per_sec": 0, 00:15:24.530 "rw_mbytes_per_sec": 0, 00:15:24.530 "r_mbytes_per_sec": 0, 00:15:24.530 "w_mbytes_per_sec": 0 00:15:24.530 }, 00:15:24.530 "claimed": false, 00:15:24.530 "zoned": false, 00:15:24.530 "supported_io_types": { 00:15:24.530 "read": true, 00:15:24.530 "write": true, 00:15:24.530 "unmap": true, 00:15:24.530 "flush": true, 
00:15:24.530 "reset": true, 00:15:24.530 "nvme_admin": true, 00:15:24.530 "nvme_io": true, 00:15:24.530 "nvme_io_md": false, 00:15:24.530 "write_zeroes": true, 00:15:24.530 "zcopy": false, 00:15:24.530 "get_zone_info": false, 00:15:24.530 "zone_management": false, 00:15:24.530 "zone_append": false, 00:15:24.530 "compare": true, 00:15:24.530 "compare_and_write": false, 00:15:24.530 "abort": true, 00:15:24.530 "seek_hole": false, 00:15:24.530 "seek_data": false, 00:15:24.530 "copy": true, 00:15:24.530 "nvme_iov_md": false 00:15:24.530 }, 00:15:24.530 "driver_specific": { 00:15:24.530 "nvme": [ 00:15:24.530 { 00:15:24.530 "pci_address": "0000:00:11.0", 00:15:24.530 "trid": { 00:15:24.530 "trtype": "PCIe", 00:15:24.530 "traddr": "0000:00:11.0" 00:15:24.530 }, 00:15:24.530 "ctrlr_data": { 00:15:24.530 "cntlid": 0, 00:15:24.530 "vendor_id": "0x1b36", 00:15:24.530 "model_number": "QEMU NVMe Ctrl", 00:15:24.530 "serial_number": "12341", 00:15:24.530 "firmware_revision": "8.0.0", 00:15:24.530 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:24.530 "oacs": { 00:15:24.530 "security": 0, 00:15:24.530 "format": 1, 00:15:24.530 "firmware": 0, 00:15:24.530 "ns_manage": 1 00:15:24.530 }, 00:15:24.530 "multi_ctrlr": false, 00:15:24.530 "ana_reporting": false 00:15:24.530 }, 00:15:24.530 "vs": { 00:15:24.530 "nvme_version": "1.4" 00:15:24.530 }, 00:15:24.530 "ns_data": { 00:15:24.530 "id": 1, 00:15:24.530 "can_share": false 00:15:24.530 } 00:15:24.530 } 00:15:24.530 ], 00:15:24.530 "mp_policy": "active_passive" 00:15:24.530 } 00:15:24.530 } 00:15:24.530 ]' 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:24.530 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:24.789 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:24.789 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:25.048 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0f4abe60-61ac-4519-9078-023f8313270f 00:15:25.048 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f4abe60-61ac-4519-9078-023f8313270f 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:25.306 20:04:58 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:25.306 20:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.306 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:25.306 { 00:15:25.306 "name": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:25.306 "aliases": [ 00:15:25.306 "lvs/nvme0n1p0" 00:15:25.306 ], 00:15:25.306 "product_name": "Logical Volume", 00:15:25.306 "block_size": 4096, 00:15:25.306 "num_blocks": 26476544, 00:15:25.306 "uuid": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:25.306 "assigned_rate_limits": { 00:15:25.306 "rw_ios_per_sec": 0, 00:15:25.306 "rw_mbytes_per_sec": 0, 00:15:25.306 "r_mbytes_per_sec": 0, 00:15:25.306 "w_mbytes_per_sec": 0 00:15:25.306 }, 00:15:25.306 "claimed": false, 00:15:25.306 "zoned": false, 00:15:25.306 "supported_io_types": { 00:15:25.306 "read": true, 00:15:25.306 "write": true, 00:15:25.306 "unmap": true, 00:15:25.306 "flush": false, 00:15:25.306 "reset": true, 00:15:25.306 "nvme_admin": false, 00:15:25.306 "nvme_io": false, 00:15:25.306 "nvme_io_md": false, 00:15:25.306 "write_zeroes": true, 00:15:25.306 "zcopy": false, 00:15:25.306 "get_zone_info": false, 00:15:25.306 "zone_management": false, 00:15:25.306 "zone_append": false, 00:15:25.306 "compare": false, 00:15:25.306 "compare_and_write": false, 00:15:25.306 "abort": false, 00:15:25.306 "seek_hole": true, 00:15:25.306 "seek_data": true, 00:15:25.306 "copy": false, 00:15:25.306 "nvme_iov_md": false 00:15:25.306 }, 00:15:25.306 "driver_specific": { 00:15:25.306 "lvol": { 00:15:25.306 "lvol_store_uuid": "0f4abe60-61ac-4519-9078-023f8313270f", 00:15:25.306 "base_bdev": "nvme0n1", 00:15:25.306 "thin_provision": true, 00:15:25.306 "num_allocated_clusters": 0, 00:15:25.306 "snapshot": false, 00:15:25.307 "clone": false, 00:15:25.307 "esnap_clone": false 00:15:25.307 } 00:15:25.307 } 00:15:25.307 } 00:15:25.307 ]' 00:15:25.307 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:25.307 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:25.307 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:25.564 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:15:25.823 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5c19515-739c-4a23-b603-5e65a34163d2 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:25.823 { 00:15:25.823 "name": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:25.823 "aliases": [ 00:15:25.823 "lvs/nvme0n1p0" 00:15:25.823 ], 00:15:25.823 "product_name": "Logical Volume", 00:15:25.823 "block_size": 4096, 00:15:25.823 "num_blocks": 26476544, 00:15:25.823 "uuid": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:25.823 "assigned_rate_limits": { 00:15:25.823 "rw_ios_per_sec": 0, 00:15:25.823 "rw_mbytes_per_sec": 0, 00:15:25.823 "r_mbytes_per_sec": 0, 00:15:25.823 "w_mbytes_per_sec": 0 00:15:25.823 }, 00:15:25.823 "claimed": false, 00:15:25.823 "zoned": false, 00:15:25.823 "supported_io_types": { 00:15:25.823 "read": true, 00:15:25.823 "write": true, 00:15:25.823 "unmap": true, 00:15:25.823 "flush": false, 00:15:25.823 "reset": true, 00:15:25.823 "nvme_admin": false, 00:15:25.823 "nvme_io": false, 00:15:25.823 "nvme_io_md": false, 00:15:25.823 "write_zeroes": true, 00:15:25.823 "zcopy": false, 00:15:25.823 "get_zone_info": false, 00:15:25.823 "zone_management": false, 00:15:25.823 "zone_append": false, 00:15:25.823 "compare": false, 00:15:25.823 "compare_and_write": false, 00:15:25.823 "abort": false, 00:15:25.823 "seek_hole": true, 00:15:25.823 "seek_data": true, 00:15:25.823 "copy": false, 00:15:25.823 "nvme_iov_md": false 00:15:25.823 }, 00:15:25.823 "driver_specific": { 00:15:25.823 "lvol": { 00:15:25.823 "lvol_store_uuid": "0f4abe60-61ac-4519-9078-023f8313270f", 00:15:25.823 "base_bdev": "nvme0n1", 00:15:25.823 "thin_provision": true, 00:15:25.823 "num_allocated_clusters": 0, 00:15:25.823 "snapshot": false, 00:15:25.823 "clone": false, 00:15:25.823 "esnap_clone": false 00:15:25.823 } 00:15:25.823 } 00:15:25.823 } 00:15:25.823 ]' 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:25.823 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:26.081 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:26.082 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size c5c19515-739c-4a23-b603-5e65a34163d2 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c5c19515-739c-4a23-b603-5e65a34163d2 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:26.082 20:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5c19515-739c-4a23-b603-5e65a34163d2 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:26.340 { 00:15:26.340 "name": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:26.340 "aliases": [ 00:15:26.340 "lvs/nvme0n1p0" 00:15:26.340 ], 00:15:26.340 "product_name": "Logical Volume", 00:15:26.340 "block_size": 4096, 00:15:26.340 "num_blocks": 26476544, 00:15:26.340 "uuid": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:26.340 "assigned_rate_limits": { 00:15:26.340 "rw_ios_per_sec": 0, 00:15:26.340 "rw_mbytes_per_sec": 0, 00:15:26.340 "r_mbytes_per_sec": 0, 00:15:26.340 "w_mbytes_per_sec": 0 00:15:26.340 }, 00:15:26.340 "claimed": false, 00:15:26.340 "zoned": false, 00:15:26.340 "supported_io_types": { 00:15:26.340 "read": true, 00:15:26.340 "write": true, 00:15:26.340 "unmap": true, 00:15:26.340 "flush": false, 00:15:26.340 "reset": true, 00:15:26.340 "nvme_admin": false, 00:15:26.340 "nvme_io": false, 00:15:26.340 "nvme_io_md": false, 00:15:26.340 "write_zeroes": true, 00:15:26.340 "zcopy": false, 00:15:26.340 "get_zone_info": false, 00:15:26.340 "zone_management": false, 00:15:26.340 "zone_append": false, 00:15:26.340 "compare": false, 00:15:26.340 "compare_and_write": false, 00:15:26.340 "abort": false, 00:15:26.340 "seek_hole": true, 00:15:26.340 "seek_data": true, 00:15:26.340 "copy": false, 00:15:26.340 "nvme_iov_md": false 00:15:26.340 }, 00:15:26.340 "driver_specific": { 00:15:26.340 "lvol": { 00:15:26.340 "lvol_store_uuid": "0f4abe60-61ac-4519-9078-023f8313270f", 00:15:26.340 "base_bdev": "nvme0n1", 00:15:26.340 "thin_provision": true, 00:15:26.340 "num_allocated_clusters": 0, 00:15:26.340 "snapshot": false, 00:15:26.340 "clone": false, 00:15:26.340 "esnap_clone": false 00:15:26.340 } 00:15:26.340 } 00:15:26.340 } 00:15:26.340 ]' 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:26.340 20:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
c5c19515-739c-4a23-b603-5e65a34163d2 -c nvc0n1p0 --l2p_dram_limit 60 00:15:26.600 [2024-11-19 20:05:00.284107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.284143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:26.600 [2024-11-19 20:05:00.284156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:26.600 [2024-11-19 20:05:00.284163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.284216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.284235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:26.600 [2024-11-19 20:05:00.284243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:26.600 [2024-11-19 20:05:00.284249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.284277] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:26.600 [2024-11-19 20:05:00.284851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:26.600 [2024-11-19 20:05:00.284867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.284873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:26.600 [2024-11-19 20:05:00.284881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:15:26.600 [2024-11-19 20:05:00.284887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.284945] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4085363a-5e21-40de-b2ee-63b845c24310 00:15:26.600 [2024-11-19 20:05:00.285932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.285951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:26.600 [2024-11-19 20:05:00.285959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:26.600 [2024-11-19 20:05:00.285966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.290723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.290754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:26.600 [2024-11-19 20:05:00.290762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:15:26.600 [2024-11-19 20:05:00.290769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.290847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.290856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:26.600 [2024-11-19 20:05:00.290863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:15:26.600 [2024-11-19 20:05:00.290873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.290919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.290927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:26.600 [2024-11-19 20:05:00.290934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.009 ms 00:15:26.600 [2024-11-19 20:05:00.290941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.290964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:26.600 [2024-11-19 20:05:00.293851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.293875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:26.600 [2024-11-19 20:05:00.293885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:15:26.600 [2024-11-19 20:05:00.293893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.293922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.293928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:26.600 [2024-11-19 20:05:00.293935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:26.600 [2024-11-19 20:05:00.293941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.293959] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:26.600 [2024-11-19 20:05:00.294075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:26.600 [2024-11-19 20:05:00.294087] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:26.600 [2024-11-19 20:05:00.294095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:26.600 [2024-11-19 20:05:00.294104] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294111] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:26.600 [2024-11-19 20:05:00.294124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:26.600 [2024-11-19 20:05:00.294131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:26.600 [2024-11-19 20:05:00.294136] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:26.600 [2024-11-19 20:05:00.294143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.294151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:26.600 [2024-11-19 20:05:00.294159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:15:26.600 [2024-11-19 20:05:00.294165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.294249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.600 [2024-11-19 20:05:00.294256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:26.600 [2024-11-19 20:05:00.294263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:26.600 [2024-11-19 20:05:00.294269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.600 [2024-11-19 20:05:00.294356] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:15:26.600 [2024-11-19 20:05:00.294363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:26.600 [2024-11-19 20:05:00.294372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:26.600 [2024-11-19 20:05:00.294391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:26.600 [2024-11-19 20:05:00.294409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.600 [2024-11-19 20:05:00.294420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:26.600 [2024-11-19 20:05:00.294425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:26.600 [2024-11-19 20:05:00.294431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.600 [2024-11-19 20:05:00.294437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:26.600 [2024-11-19 20:05:00.294443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:26.600 [2024-11-19 20:05:00.294448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:26.600 [2024-11-19 20:05:00.294462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:26.600 [2024-11-19 20:05:00.294483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:26.600 [2024-11-19 20:05:00.294499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:26.600 [2024-11-19 20:05:00.294516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:26.600 [2024-11-19 20:05:00.294532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.600 [2024-11-19 20:05:00.294543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:26.600 [2024-11-19 20:05:00.294550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.600 [2024-11-19 20:05:00.294562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:26.600 [2024-11-19 20:05:00.294576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:26.600 [2024-11-19 20:05:00.294582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.600 [2024-11-19 20:05:00.294587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:26.600 [2024-11-19 20:05:00.294593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:26.600 [2024-11-19 20:05:00.294598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.600 [2024-11-19 20:05:00.294604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:26.601 [2024-11-19 20:05:00.294609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:26.601 [2024-11-19 20:05:00.294616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.601 [2024-11-19 20:05:00.294621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:26.601 [2024-11-19 20:05:00.294628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:26.601 [2024-11-19 20:05:00.294633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.601 [2024-11-19 20:05:00.294640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.601 [2024-11-19 20:05:00.294645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:26.601 [2024-11-19 20:05:00.294653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:26.601 [2024-11-19 20:05:00.294658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:26.601 [2024-11-19 20:05:00.294664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:26.601 [2024-11-19 20:05:00.294672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:26.601 [2024-11-19 20:05:00.294679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:26.601 [2024-11-19 20:05:00.294686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:26.601 [2024-11-19 20:05:00.294694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:26.601 [2024-11-19 20:05:00.294708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:26.601 [2024-11-19 20:05:00.294714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:26.601 [2024-11-19 20:05:00.294721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:26.601 [2024-11-19 20:05:00.294726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:26.601 [2024-11-19 20:05:00.294733] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:26.601 [2024-11-19 20:05:00.294738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:26.601 [2024-11-19 20:05:00.294744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:26.601 [2024-11-19 20:05:00.294750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:26.601 [2024-11-19 20:05:00.294758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:26.601 [2024-11-19 20:05:00.294788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:26.601 [2024-11-19 20:05:00.294795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:26.601 [2024-11-19 20:05:00.294809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:26.601 [2024-11-19 20:05:00.294815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:26.601 [2024-11-19 20:05:00.294821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:26.601 [2024-11-19 20:05:00.294827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.601 [2024-11-19 20:05:00.294833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:26.601 [2024-11-19 20:05:00.294839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:15:26.601 [2024-11-19 20:05:00.294846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.601 [2024-11-19 20:05:00.294908] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
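[Editor's note] The sizes in the layout dump above are consistent with the bdev geometry queried earlier in this log. Below is a minimal sketch of the arithmetic, reusing the rpc.py/jq pattern visible in the trace; it assumes the SPDK target from this run is still up, and the real get_bdev_size helper in autotest_common.sh may differ in detail:

  # Re-derive bdev_size=103424 (MiB) from the JSON printed by bdev_get_bdevs above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=c5c19515-739c-4a23-b603-5e65a34163d2
  bs=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')   # 4096
  nb=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')   # 26476544
  echo $(( nb * bs / 1024 / 1024 ))                             # 103424 MiB
  # L2P sanity check against the dump: 20971520 entries * 4-byte L2P addresses
  # = 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))                        # 80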
00:15:26.601 [2024-11-19 20:05:00.294919] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:29.887 [2024-11-19 20:05:03.232898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.887 [2024-11-19 20:05:03.233190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:29.887 [2024-11-19 20:05:03.233214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2937.976 ms 00:15:29.887 [2024-11-19 20:05:03.233243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.887 [2024-11-19 20:05:03.258068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.887 [2024-11-19 20:05:03.258112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:29.887 [2024-11-19 20:05:03.258125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.618 ms 00:15:29.887 [2024-11-19 20:05:03.258134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.258273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.258287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:29.888 [2024-11-19 20:05:03.258296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:15:29.888 [2024-11-19 20:05:03.258307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.302594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.302668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:29.888 [2024-11-19 20:05:03.302698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.229 ms 00:15:29.888 [2024-11-19 20:05:03.302720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.302791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.302814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:29.888 [2024-11-19 20:05:03.302831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:29.888 [2024-11-19 20:05:03.302848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.303365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.303418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:29.888 [2024-11-19 20:05:03.303437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:15:29.888 [2024-11-19 20:05:03.303459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.303696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.303723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:29.888 [2024-11-19 20:05:03.303741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:15:29.888 [2024-11-19 20:05:03.303762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.320429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.320568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:29.888 [2024-11-19 
20:05:03.320584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.628 ms 00:15:29.888 [2024-11-19 20:05:03.320593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.331833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:29.888 [2024-11-19 20:05:03.345543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.345586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:29.888 [2024-11-19 20:05:03.345597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.858 ms 00:15:29.888 [2024-11-19 20:05:03.345607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.400564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.400609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:29.888 [2024-11-19 20:05:03.400628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.921 ms 00:15:29.888 [2024-11-19 20:05:03.400636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.400824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.400836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:29.888 [2024-11-19 20:05:03.400848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:15:29.888 [2024-11-19 20:05:03.400856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.423857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.423993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:29.888 [2024-11-19 20:05:03.424013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.944 ms 00:15:29.888 [2024-11-19 20:05:03.424021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.446490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.446607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:29.888 [2024-11-19 20:05:03.446626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.430 ms 00:15:29.888 [2024-11-19 20:05:03.446633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.447204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.447233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:29.888 [2024-11-19 20:05:03.447244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:15:29.888 [2024-11-19 20:05:03.447252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.512681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.512728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:29.888 [2024-11-19 20:05:03.512744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.386 ms 00:15:29.888 [2024-11-19 20:05:03.512754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 
20:05:03.537110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.537258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:29.888 [2024-11-19 20:05:03.537278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.270 ms 00:15:29.888 [2024-11-19 20:05:03.537286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.559945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.559977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:29.888 [2024-11-19 20:05:03.559989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.613 ms 00:15:29.888 [2024-11-19 20:05:03.559997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.583102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.583137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:29.888 [2024-11-19 20:05:03.583149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.065 ms 00:15:29.888 [2024-11-19 20:05:03.583157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.583202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.583212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:29.888 [2024-11-19 20:05:03.583240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:29.888 [2024-11-19 20:05:03.583250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.583328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-19 20:05:03.583337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:29.888 [2024-11-19 20:05:03.583347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:29.888 [2024-11-19 20:05:03.583355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-19 20:05:03.584302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3299.740 ms, result 0 00:15:29.888 { 00:15:29.888 "name": "ftl0", 00:15:29.888 "uuid": "4085363a-5e21-40de-b2ee-63b845c24310" 00:15:29.888 } 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.888 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.147 20:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:30.405 [ 00:15:30.405 { 00:15:30.405 "name": "ftl0", 00:15:30.405 "aliases": [ 00:15:30.405 "4085363a-5e21-40de-b2ee-63b845c24310" 00:15:30.405 ], 00:15:30.405 "product_name": "FTL 
disk", 00:15:30.405 "block_size": 4096, 00:15:30.405 "num_blocks": 20971520, 00:15:30.405 "uuid": "4085363a-5e21-40de-b2ee-63b845c24310", 00:15:30.405 "assigned_rate_limits": { 00:15:30.405 "rw_ios_per_sec": 0, 00:15:30.405 "rw_mbytes_per_sec": 0, 00:15:30.405 "r_mbytes_per_sec": 0, 00:15:30.405 "w_mbytes_per_sec": 0 00:15:30.405 }, 00:15:30.405 "claimed": false, 00:15:30.405 "zoned": false, 00:15:30.405 "supported_io_types": { 00:15:30.405 "read": true, 00:15:30.405 "write": true, 00:15:30.405 "unmap": true, 00:15:30.405 "flush": true, 00:15:30.405 "reset": false, 00:15:30.405 "nvme_admin": false, 00:15:30.405 "nvme_io": false, 00:15:30.405 "nvme_io_md": false, 00:15:30.405 "write_zeroes": true, 00:15:30.405 "zcopy": false, 00:15:30.405 "get_zone_info": false, 00:15:30.405 "zone_management": false, 00:15:30.405 "zone_append": false, 00:15:30.405 "compare": false, 00:15:30.405 "compare_and_write": false, 00:15:30.405 "abort": false, 00:15:30.405 "seek_hole": false, 00:15:30.405 "seek_data": false, 00:15:30.405 "copy": false, 00:15:30.405 "nvme_iov_md": false 00:15:30.405 }, 00:15:30.405 "driver_specific": { 00:15:30.405 "ftl": { 00:15:30.405 "base_bdev": "c5c19515-739c-4a23-b603-5e65a34163d2", 00:15:30.405 "cache": "nvc0n1p0" 00:15:30.405 } 00:15:30.405 } 00:15:30.405 } 00:15:30.405 ] 00:15:30.405 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:30.405 20:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:30.405 20:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:30.664 20:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:30.664 20:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:30.664 [2024-11-19 20:05:04.397002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.397049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:30.664 [2024-11-19 20:05:04.397062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:30.664 [2024-11-19 20:05:04.397078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.397114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:30.664 [2024-11-19 20:05:04.399703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.399838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:30.664 [2024-11-19 20:05:04.399859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:15:30.664 [2024-11-19 20:05:04.399867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.400300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.400316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:30.664 [2024-11-19 20:05:04.400326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:15:30.664 [2024-11-19 20:05:04.400334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.403579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.403600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:30.664 
[2024-11-19 20:05:04.403610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:15:30.664 [2024-11-19 20:05:04.403618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.409823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.409847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:30.664 [2024-11-19 20:05:04.409859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.180 ms 00:15:30.664 [2024-11-19 20:05:04.409867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.433381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.433414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:30.664 [2024-11-19 20:05:04.433426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.438 ms 00:15:30.664 [2024-11-19 20:05:04.433434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.448012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.448045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:30.664 [2024-11-19 20:05:04.448058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.522 ms 00:15:30.664 [2024-11-19 20:05:04.448068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.664 [2024-11-19 20:05:04.448274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.664 [2024-11-19 20:05:04.448286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:30.664 [2024-11-19 20:05:04.448296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:15:30.664 [2024-11-19 20:05:04.448303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.924 [2024-11-19 20:05:04.471328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.924 [2024-11-19 20:05:04.471357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:30.924 [2024-11-19 20:05:04.471369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.997 ms 00:15:30.924 [2024-11-19 20:05:04.471377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.924 [2024-11-19 20:05:04.493734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.924 [2024-11-19 20:05:04.493764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:30.924 [2024-11-19 20:05:04.493776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.311 ms 00:15:30.924 [2024-11-19 20:05:04.493783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.924 [2024-11-19 20:05:04.515907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.924 [2024-11-19 20:05:04.515936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:30.924 [2024-11-19 20:05:04.515948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.082 ms 00:15:30.924 [2024-11-19 20:05:04.515955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.924 [2024-11-19 20:05:04.538643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.924 [2024-11-19 20:05:04.538673] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:30.924 [2024-11-19 20:05:04.538684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:15:30.924 [2024-11-19 20:05:04.538692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.924 [2024-11-19 20:05:04.538729] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:30.924 [2024-11-19 20:05:04.538743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:30.924 [2024-11-19 20:05:04.538880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 
[2024-11-19 20:05:04.538931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.538995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:30.925 [2024-11-19 20:05:04.539145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:30.925 [2024-11-19 20:05:04.539630] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:30.925 [2024-11-19 20:05:04.539639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4085363a-5e21-40de-b2ee-63b845c24310 00:15:30.926 [2024-11-19 20:05:04.539647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:30.926 [2024-11-19 20:05:04.539657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:30.926 [2024-11-19 20:05:04.539663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:30.926 [2024-11-19 20:05:04.539674] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:30.926 [2024-11-19 20:05:04.539681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:30.926 [2024-11-19 20:05:04.539689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:30.926 [2024-11-19 20:05:04.539697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:30.926 [2024-11-19 20:05:04.539704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:30.926 [2024-11-19 20:05:04.539710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:30.926 [2024-11-19 20:05:04.539721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.926 [2024-11-19 20:05:04.539728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:30.926 [2024-11-19 20:05:04.539737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:15:30.926 [2024-11-19 20:05:04.539744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.552066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.926 [2024-11-19 20:05:04.552096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:30.926 [2024-11-19 20:05:04.552108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.288 ms 00:15:30.926 [2024-11-19 20:05:04.552116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.552504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.926 [2024-11-19 20:05:04.552518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:30.926 [2024-11-19 20:05:04.552528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:15:30.926 [2024-11-19 20:05:04.552535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.595801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.926 [2024-11-19 20:05:04.595840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:30.926 [2024-11-19 20:05:04.595852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.926 [2024-11-19 20:05:04.595859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
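[Editor's note] The Rollback records above and below are the tail of the 'FTL shutdown' management process started by the bdev_ftl_unload call at fio.sh@73; the startup steps are unwound in reverse order. For reference, the create/unload RPC pair exactly as this run invoked it (flags copied from the trace; -t 240 raises rpc.py's client-side timeout, since bdev_ftl_create blocks until FTL startup completes and the NV cache scrub alone took ~2.9 s here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # -b: FTL bdev name; -d: base bdev (the thin-provisioned lvol);
  # -c: NV cache bdev (split of nvc0n1); --l2p_dram_limit: resident L2P cap in MiB
  $rpc -t 240 bdev_ftl_create -b ftl0 -d c5c19515-739c-4a23-b603-5e65a34163d2 -c nvc0n1p0 --l2p_dram_limit 60
  $rpc bdev_ftl_unload -b ftl0   # triggers the shutdown trace seen here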
00:15:30.926 [2024-11-19 20:05:04.595923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.926 [2024-11-19 20:05:04.595931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:30.926 [2024-11-19 20:05:04.595940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.926 [2024-11-19 20:05:04.595948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.596041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.926 [2024-11-19 20:05:04.596051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:30.926 [2024-11-19 20:05:04.596063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.926 [2024-11-19 20:05:04.596070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.596096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.926 [2024-11-19 20:05:04.596104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:30.926 [2024-11-19 20:05:04.596113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.926 [2024-11-19 20:05:04.596120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.926 [2024-11-19 20:05:04.675183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.926 [2024-11-19 20:05:04.675243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:30.926 [2024-11-19 20:05:04.675256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.926 [2024-11-19 20:05:04.675264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.736653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.736698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:31.186 [2024-11-19 20:05:04.736710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.736718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.736791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.736801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:31.186 [2024-11-19 20:05:04.736810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.736820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.736889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.736898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:31.186 [2024-11-19 20:05:04.736908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.736915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.737013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.737023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:31.186 [2024-11-19 20:05:04.737032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 
20:05:04.737039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.737099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.737109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:31.186 [2024-11-19 20:05:04.737118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.737126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.737167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.737175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:31.186 [2024-11-19 20:05:04.737184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.737192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.737269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:31.186 [2024-11-19 20:05:04.737279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:31.186 [2024-11-19 20:05:04.737288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:31.186 [2024-11-19 20:05:04.737295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:31.186 [2024-11-19 20:05:04.737438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.412 ms, result 0 00:15:31.186 true 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72319 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72319 ']' 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72319 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72319 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72319' 00:15:31.186 killing process with pid 72319 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72319 00:15:31.186 20:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72319 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:37.767 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.768 20:05:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:37.768 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:37.768 fio-3.35 00:15:37.768 Starting 1 thread 00:15:41.121 00:15:41.121 test: (groupid=0, jobs=1): err= 0: pid=72505: Tue Nov 19 20:05:14 2024 00:15:41.121 read: IOPS=1353, BW=89.9MiB/s (94.2MB/s)(255MiB/2832msec) 00:15:41.121 slat (nsec): min=3954, max=87283, avg=5301.73, stdev=2121.11 00:15:41.121 clat (usec): min=246, max=1167, avg=332.07, stdev=55.52 00:15:41.121 lat (usec): min=251, max=1172, avg=337.37, stdev=56.18 00:15:41.121 clat percentiles (usec): 00:15:41.121 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 314], 00:15:41.121 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 322], 00:15:41.121 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 433], 00:15:41.121 | 99.00th=[ 570], 99.50th=[ 652], 99.90th=[ 914], 99.95th=[ 955], 00:15:41.121 | 99.99th=[ 1172] 00:15:41.121 write: IOPS=1363, BW=90.5MiB/s (94.9MB/s)(256MiB/2829msec); 0 zone resets 00:15:41.121 slat (nsec): min=14811, max=68386, avg=18486.54, stdev=2706.69 00:15:41.121 clat (usec): min=289, max=1281, avg=366.84, stdev=78.53 00:15:41.121 lat (usec): min=307, max=1299, avg=385.33, stdev=79.00 00:15:41.121 clat percentiles (usec): 00:15:41.121 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 338], 00:15:41.121 | 30.00th=[ 343], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:15:41.121 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 506], 00:15:41.121 | 99.00th=[ 693], 99.50th=[ 758], 99.90th=[ 1029], 99.95th=[ 1074], 00:15:41.121 | 99.99th=[ 1287] 00:15:41.121 bw ( KiB/s): min=89760, max=95744, per=99.98%, avg=92670.40, stdev=2358.73, samples=5 00:15:41.121 iops : min= 1320, max= 1408, avg=1362.80, stdev=34.69, samples=5 00:15:41.121 lat (usec) : 250=0.01%, 500=96.32%, 750=3.30%, 
1000=0.29% 00:15:41.121 lat (msec) : 2=0.08% 00:15:41.121 cpu : usr=99.26%, sys=0.07%, ctx=4, majf=0, minf=1169 00:15:41.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.121 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.121 00:15:41.121 Run status group 0 (all jobs): 00:15:41.121 READ: bw=89.9MiB/s (94.2MB/s), 89.9MiB/s-89.9MiB/s (94.2MB/s-94.2MB/s), io=255MiB (267MB), run=2832-2832msec 00:15:41.121 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=256MiB (269MB), run=2829-2829msec 00:15:42.062 ----------------------------------------------------- 00:15:42.062 Suppressions used: 00:15:42.062 count bytes template 00:15:42.062 1 5 /usr/src/fio/parse.c 00:15:42.062 1 8 libtcmalloc_minimal.so 00:15:42.062 1 904 libcrypto.so 00:15:42.062 ----------------------------------------------------- 00:15:42.062 00:15:42.062 20:05:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:42.062 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:42.062 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.322 20:05:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.322 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:42.322 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:42.322 fio-3.35 00:15:42.322 Starting 2 threads 00:16:08.896 00:16:08.896 first_half: (groupid=0, jobs=1): err= 0: pid=72591: Tue Nov 19 20:05:39 2024 00:16:08.896 read: IOPS=2908, BW=11.4MiB/s (11.9MB/s)(255MiB/22430msec) 00:16:08.896 slat (nsec): min=2962, max=49388, avg=5090.13, stdev=1055.35 00:16:08.896 clat (usec): min=594, max=305553, avg=34548.38, stdev=17291.67 00:16:08.896 lat (usec): min=600, max=305558, avg=34553.47, stdev=17291.74 00:16:08.896 clat percentiles (msec): 00:16:08.896 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:16:08.896 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:08.896 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 46], 00:16:08.896 | 99.00th=[ 128], 99.50th=[ 148], 99.90th=[ 201], 99.95th=[ 255], 00:16:08.896 | 99.99th=[ 300] 00:16:08.896 write: IOPS=3501, BW=13.7MiB/s (14.3MB/s)(256MiB/18719msec); 0 zone resets 00:16:08.896 slat (usec): min=3, max=1544, avg= 6.66, stdev= 7.87 00:16:08.896 clat (usec): min=396, max=74442, avg=9370.12, stdev=15448.99 00:16:08.896 lat (usec): min=401, max=74449, avg=9376.78, stdev=15449.08 00:16:08.896 clat percentiles (usec): 00:16:08.896 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 816], 20.00th=[ 1139], 00:16:08.896 | 30.00th=[ 2606], 40.00th=[ 3621], 50.00th=[ 4883], 60.00th=[ 5473], 00:16:08.896 | 70.00th=[ 6128], 80.00th=[ 9503], 90.00th=[17171], 95.00th=[58983], 00:16:08.896 | 99.00th=[65799], 99.50th=[67634], 99.90th=[71828], 99.95th=[72877], 00:16:08.896 | 99.99th=[73925] 00:16:08.896 bw ( KiB/s): min= 1008, max=42576, per=89.14%, avg=24966.10, stdev=13392.79, samples=21 00:16:08.896 iops : min= 252, max=10644, avg=6241.52, stdev=3348.20, samples=21 00:16:08.896 lat (usec) : 500=0.04%, 750=3.25%, 1000=4.85% 00:16:08.896 lat (msec) : 2=5.24%, 4=8.41%, 10=19.36%, 20=5.52%, 50=47.40% 00:16:08.896 lat (msec) : 100=4.87%, 250=1.04%, 500=0.03% 00:16:08.896 cpu : usr=99.35%, sys=0.12%, ctx=47, majf=0, minf=5579 00:16:08.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:08.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.896 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.896 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.896 second_half: (groupid=0, jobs=1): err= 0: pid=72592: Tue Nov 19 20:05:39 2024 00:16:08.896 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(255MiB/22537msec) 00:16:08.896 slat (nsec): min=3026, max=25460, avg=4047.01, stdev=958.56 00:16:08.896 clat (usec): min=622, max=308543, avg=34144.33, stdev=17559.17 00:16:08.896 lat (usec): min=627, max=308549, avg=34148.38, stdev=17559.31 00:16:08.896 clat percentiles (msec): 00:16:08.896 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:16:08.896 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:08.896 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 
95.00th=[ 46], 00:16:08.896 | 99.00th=[ 130], 99.50th=[ 150], 99.90th=[ 213], 99.95th=[ 234], 00:16:08.896 | 99.99th=[ 305] 00:16:08.896 write: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(256MiB/17914msec); 0 zone resets 00:16:08.896 slat (usec): min=3, max=208, avg= 5.74, stdev= 2.66 00:16:08.896 clat (usec): min=363, max=75006, avg=10004.48, stdev=15936.91 00:16:08.896 lat (usec): min=372, max=75012, avg=10010.22, stdev=15937.00 00:16:08.896 clat percentiles (usec): 00:16:08.896 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 1074], 00:16:08.896 | 30.00th=[ 2507], 40.00th=[ 3687], 50.00th=[ 4555], 60.00th=[ 5276], 00:16:08.896 | 70.00th=[ 6456], 80.00th=[11338], 90.00th=[27132], 95.00th=[58983], 00:16:08.896 | 99.00th=[65799], 99.50th=[68682], 99.90th=[71828], 99.95th=[72877], 00:16:08.896 | 99.99th=[74974] 00:16:08.896 bw ( KiB/s): min= 4872, max=42240, per=89.14%, avg=24966.10, stdev=12037.34, samples=21 00:16:08.896 iops : min= 1218, max=10560, avg=6241.52, stdev=3009.34, samples=21 00:16:08.896 lat (usec) : 500=0.03%, 750=3.62%, 1000=5.51% 00:16:08.896 lat (msec) : 2=4.72%, 4=7.66%, 10=18.68%, 20=5.94%, 50=47.77% 00:16:08.896 lat (msec) : 100=5.10%, 250=0.96%, 500=0.01% 00:16:08.896 cpu : usr=99.30%, sys=0.08%, ctx=29, majf=0, minf=5542 00:16:08.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:08.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.896 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.896 issued rwts: total=65252,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.896 00:16:08.896 Run status group 0 (all jobs): 00:16:08.896 READ: bw=22.6MiB/s (23.7MB/s), 11.3MiB/s-11.4MiB/s (11.9MB/s-11.9MB/s), io=510MiB (534MB), run=22430-22537msec 00:16:08.896 WRITE: bw=27.4MiB/s (28.7MB/s), 13.7MiB/s-14.3MiB/s (14.3MB/s-15.0MB/s), io=512MiB (537MB), run=17914-18719msec 00:16:08.896 ----------------------------------------------------- 00:16:08.896 Suppressions used: 00:16:08.896 count bytes template 00:16:08.896 2 10 /usr/src/fio/parse.c 00:16:08.896 2 192 /usr/src/fio/iolog.c 00:16:08.896 1 8 libtcmalloc_minimal.so 00:16:08.896 1 904 libcrypto.so 00:16:08.896 ----------------------------------------------------- 00:16:08.896 00:16:08.896 20:05:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:08.896 20:05:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.896 20:05:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
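Each fio job in this suite is launched through the harness's fio_plugin wrapper; the ldd, grep, and awk xtrace lines around this point are that wrapper working out which ASAN runtime the SPDK ioengine links against so it can be preloaded ahead of it. A minimal standalone sketch of the same preload logic, with the paths copied from the trace (the real helper lives in common/autotest_common.sh):

#!/usr/bin/env bash
# Sketch of the fio_plugin preload step traced above, not the harness itself.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# The sanitizer runtime must come first in LD_PRELOAD, ahead of the ioengine,
# or fio cannot load the instrumented plugin.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$1"   # $1: fio job file

On a non-sanitized build ldd reports no libasan, asan_lib stays empty, and the ${asan_lib:+...} expansion leaves only the plugin in LD_PRELOAD.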
00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.896 20:05:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.896 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:08.896 fio-3.35 00:16:08.896 Starting 1 thread 00:16:27.020 00:16:27.020 test: (groupid=0, jobs=1): err= 0: pid=72899: Tue Nov 19 20:05:59 2024 00:16:27.020 read: IOPS=6556, BW=25.6MiB/s (26.9MB/s)(255MiB/9945msec) 00:16:27.020 slat (nsec): min=2954, max=33838, avg=4571.20, stdev=1121.74 00:16:27.020 clat (usec): min=965, max=37136, avg=19515.09, stdev=2540.11 00:16:27.020 lat (usec): min=973, max=37140, avg=19519.66, stdev=2540.09 00:16:27.020 clat percentiles (usec): 00:16:27.020 | 1.00th=[14877], 5.00th=[15664], 10.00th=[16450], 20.00th=[17433], 00:16:27.020 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:16:27.020 | 70.00th=[20317], 80.00th=[21103], 90.00th=[22676], 95.00th=[24249], 00:16:27.020 | 99.00th=[27132], 99.50th=[27919], 99.90th=[29492], 99.95th=[33424], 00:16:27.020 | 99.99th=[36963] 00:16:27.020 write: IOPS=9940, BW=38.8MiB/s (40.7MB/s)(256MiB/6593msec); 0 zone resets 00:16:27.020 slat (usec): min=4, max=1543, avg= 7.16, stdev= 7.64 00:16:27.020 clat (usec): min=624, max=77088, avg=12818.55, stdev=15950.43 00:16:27.021 lat (usec): min=629, max=77094, avg=12825.71, stdev=15950.41 00:16:27.021 clat percentiles (usec): 00:16:27.021 | 1.00th=[ 1188], 5.00th=[ 1483], 10.00th=[ 1680], 20.00th=[ 1958], 00:16:27.021 | 30.00th=[ 2278], 40.00th=[ 3294], 50.00th=[ 5997], 60.00th=[ 8160], 00:16:27.021 | 70.00th=[11863], 80.00th=[17957], 90.00th=[45876], 95.00th=[49546], 00:16:27.021 | 99.00th=[54789], 99.50th=[56886], 99.90th=[60556], 99.95th=[61604], 00:16:27.021 | 99.99th=[74974] 00:16:27.021 bw ( KiB/s): min= 5041, max=79648, per=94.15%, avg=37435.50, stdev=15900.28, samples=14 00:16:27.021 iops : min= 1260, max=19912, avg=9358.86, stdev=3975.11, samples=14 00:16:27.021 lat (usec) : 750=0.01%, 1000=0.12% 00:16:27.021 lat (msec) : 2=10.73%, 4=9.90%, 10=12.17%, 20=40.21%, 50=24.77% 00:16:27.021 lat (msec) : 100=2.10% 00:16:27.021 cpu : usr=99.07%, sys=0.16%, 
ctx=24, majf=0, minf=5565 00:16:27.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:27.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.021 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.021 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.021 00:16:27.021 Run status group 0 (all jobs): 00:16:27.021 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=255MiB (267MB), run=9945-9945msec 00:16:27.021 WRITE: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=256MiB (268MB), run=6593-6593msec 00:16:27.964 ----------------------------------------------------- 00:16:27.964 Suppressions used: 00:16:27.964 count bytes template 00:16:27.964 1 5 /usr/src/fio/parse.c 00:16:27.964 2 192 /usr/src/fio/iolog.c 00:16:27.964 1 8 libtcmalloc_minimal.so 00:16:27.964 1 904 libcrypto.so 00:16:27.964 ----------------------------------------------------- 00:16:27.964 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:27.964 Remove shared memory files 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57091 /dev/shm/spdk_tgt_trace.pid71234 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:27.964 ************************************ 00:16:27.964 END TEST ftl_fio_basic 00:16:27.964 ************************************ 00:16:27.964 00:16:27.964 real 1m5.019s 00:16:27.964 user 2m13.287s 00:16:27.964 sys 0m7.168s 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.964 20:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:27.964 20:06:01 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:27.964 20:06:01 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:27.964 20:06:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.964 20:06:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:28.227 ************************************ 00:16:28.227 START TEST ftl_bdevperf 00:16:28.227 ************************************ 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:28.227 * Looking for test storage... 
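fio's summary lines report bandwidth in both binary (MiB/s) and decimal (MB/s) units, and both are simply bytes moved over wall time. Checking the depth128 READ figure above (io=255MiB over run=9945msec) with an illustrative awk one-liner:

awk 'BEGIN { mib = 255; ms = 9945              # io and runtime from the READ line
             mibs = mib / (ms / 1000)
             printf "%.1f MiB/s (%.1f MB/s)\n", mibs, mibs * 1.048576 }'
# prints: 25.6 MiB/s (26.9 MB/s), matching the Run status group above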
00:16:28.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:28.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.227 --rc genhtml_branch_coverage=1 00:16:28.227 --rc genhtml_function_coverage=1 00:16:28.227 --rc genhtml_legend=1 00:16:28.227 --rc geninfo_all_blocks=1 00:16:28.227 --rc geninfo_unexecuted_blocks=1 00:16:28.227 00:16:28.227 ' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:28.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.227 --rc genhtml_branch_coverage=1 00:16:28.227 
--rc genhtml_function_coverage=1 00:16:28.227 --rc genhtml_legend=1 00:16:28.227 --rc geninfo_all_blocks=1 00:16:28.227 --rc geninfo_unexecuted_blocks=1 00:16:28.227 00:16:28.227 ' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:28.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.227 --rc genhtml_branch_coverage=1 00:16:28.227 --rc genhtml_function_coverage=1 00:16:28.227 --rc genhtml_legend=1 00:16:28.227 --rc geninfo_all_blocks=1 00:16:28.227 --rc geninfo_unexecuted_blocks=1 00:16:28.227 00:16:28.227 ' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:28.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.227 --rc genhtml_branch_coverage=1 00:16:28.227 --rc genhtml_function_coverage=1 00:16:28.227 --rc genhtml_legend=1 00:16:28.227 --rc geninfo_all_blocks=1 00:16:28.227 --rc geninfo_unexecuted_blocks=1 00:16:28.227 00:16:28.227 ' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:28.227 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73170 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73170 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73170 ']' 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.228 20:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:28.228 [2024-11-19 20:06:02.016081] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
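bdevperf is launched here in gated mode: -z makes it finish initialization and then idle until an RPC tells it to run I/O, and -T ftl0 names the bdev the job will target once the script has built it over RPC. A hedged sketch of the launch-and-wait pattern the trace shows, where killprocess and waitforlisten are the harness helpers from autotest_common.sh and the binary path is taken from the trace:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" -z -T ftl0 &                 # -z: idle until an RPC starts the tests
bdevperf_pid=$!
# Guarantee a stray bdevperf never outlives the test on error paths.
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$bdevperf_pid"            # polls /var/tmp/spdk.sock until RPCs answer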
00:16:28.228 [2024-11-19 20:06:02.016389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:16:28.490 [2024-11-19 20:06:02.176719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.751 [2024-11-19 20:06:02.283880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:29.324 20:06:02 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:29.586 { 00:16:29.586 "name": "nvme0n1", 00:16:29.586 "aliases": [ 00:16:29.586 "9b3943fb-93fb-4203-8167-a55c3cce1853" 00:16:29.586 ], 00:16:29.586 "product_name": "NVMe disk", 00:16:29.586 "block_size": 4096, 00:16:29.586 "num_blocks": 1310720, 00:16:29.586 "uuid": "9b3943fb-93fb-4203-8167-a55c3cce1853", 00:16:29.586 "numa_id": -1, 00:16:29.586 "assigned_rate_limits": { 00:16:29.586 "rw_ios_per_sec": 0, 00:16:29.586 "rw_mbytes_per_sec": 0, 00:16:29.586 "r_mbytes_per_sec": 0, 00:16:29.586 "w_mbytes_per_sec": 0 00:16:29.586 }, 00:16:29.586 "claimed": true, 00:16:29.586 "claim_type": "read_many_write_one", 00:16:29.586 "zoned": false, 00:16:29.586 "supported_io_types": { 00:16:29.586 "read": true, 00:16:29.586 "write": true, 00:16:29.586 "unmap": true, 00:16:29.586 "flush": true, 00:16:29.586 "reset": true, 00:16:29.586 "nvme_admin": true, 00:16:29.586 "nvme_io": true, 00:16:29.586 "nvme_io_md": false, 00:16:29.586 "write_zeroes": true, 00:16:29.586 "zcopy": false, 00:16:29.586 "get_zone_info": false, 00:16:29.586 "zone_management": false, 00:16:29.586 "zone_append": false, 00:16:29.586 "compare": true, 00:16:29.586 "compare_and_write": false, 00:16:29.586 "abort": true, 00:16:29.586 "seek_hole": false, 00:16:29.586 "seek_data": false, 00:16:29.586 "copy": true, 00:16:29.586 "nvme_iov_md": false 00:16:29.586 }, 00:16:29.586 "driver_specific": { 00:16:29.586 
"nvme": [ 00:16:29.586 { 00:16:29.586 "pci_address": "0000:00:11.0", 00:16:29.586 "trid": { 00:16:29.586 "trtype": "PCIe", 00:16:29.586 "traddr": "0000:00:11.0" 00:16:29.586 }, 00:16:29.586 "ctrlr_data": { 00:16:29.586 "cntlid": 0, 00:16:29.586 "vendor_id": "0x1b36", 00:16:29.586 "model_number": "QEMU NVMe Ctrl", 00:16:29.586 "serial_number": "12341", 00:16:29.586 "firmware_revision": "8.0.0", 00:16:29.586 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:29.586 "oacs": { 00:16:29.586 "security": 0, 00:16:29.586 "format": 1, 00:16:29.586 "firmware": 0, 00:16:29.586 "ns_manage": 1 00:16:29.586 }, 00:16:29.586 "multi_ctrlr": false, 00:16:29.586 "ana_reporting": false 00:16:29.586 }, 00:16:29.586 "vs": { 00:16:29.586 "nvme_version": "1.4" 00:16:29.586 }, 00:16:29.586 "ns_data": { 00:16:29.586 "id": 1, 00:16:29.586 "can_share": false 00:16:29.586 } 00:16:29.586 } 00:16:29.586 ], 00:16:29.586 "mp_policy": "active_passive" 00:16:29.586 } 00:16:29.586 } 00:16:29.586 ]' 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:29.586 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:29.848 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0f4abe60-61ac-4519-9078-023f8313270f 00:16:29.848 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:29.848 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f4abe60-61ac-4519-9078-023f8313270f 00:16:30.110 20:06:03 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:30.371 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2f7ab50c-8b60-4e01-98bf-1fd00cda08de 00:16:30.371 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2f7ab50c-8b60-4e01-98bf-1fd00cda08de 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.632 20:06:04 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:30.632 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:30.894 { 00:16:30.894 "name": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:30.894 "aliases": [ 00:16:30.894 "lvs/nvme0n1p0" 00:16:30.894 ], 00:16:30.894 "product_name": "Logical Volume", 00:16:30.894 "block_size": 4096, 00:16:30.894 "num_blocks": 26476544, 00:16:30.894 "uuid": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:30.894 "assigned_rate_limits": { 00:16:30.894 "rw_ios_per_sec": 0, 00:16:30.894 "rw_mbytes_per_sec": 0, 00:16:30.894 "r_mbytes_per_sec": 0, 00:16:30.894 "w_mbytes_per_sec": 0 00:16:30.894 }, 00:16:30.894 "claimed": false, 00:16:30.894 "zoned": false, 00:16:30.894 "supported_io_types": { 00:16:30.894 "read": true, 00:16:30.894 "write": true, 00:16:30.894 "unmap": true, 00:16:30.894 "flush": false, 00:16:30.894 "reset": true, 00:16:30.894 "nvme_admin": false, 00:16:30.894 "nvme_io": false, 00:16:30.894 "nvme_io_md": false, 00:16:30.894 "write_zeroes": true, 00:16:30.894 "zcopy": false, 00:16:30.894 "get_zone_info": false, 00:16:30.894 "zone_management": false, 00:16:30.894 "zone_append": false, 00:16:30.894 "compare": false, 00:16:30.894 "compare_and_write": false, 00:16:30.894 "abort": false, 00:16:30.894 "seek_hole": true, 00:16:30.894 "seek_data": true, 00:16:30.894 "copy": false, 00:16:30.894 "nvme_iov_md": false 00:16:30.894 }, 00:16:30.894 "driver_specific": { 00:16:30.894 "lvol": { 00:16:30.894 "lvol_store_uuid": "2f7ab50c-8b60-4e01-98bf-1fd00cda08de", 00:16:30.894 "base_bdev": "nvme0n1", 00:16:30.894 "thin_provision": true, 00:16:30.894 "num_allocated_clusters": 0, 00:16:30.894 "snapshot": false, 00:16:30.894 "clone": false, 00:16:30.894 "esnap_clone": false 00:16:30.894 } 00:16:30.894 } 00:16:30.894 } 00:16:30.894 ]' 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:30.894 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:31.155 20:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.416 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:31.416 { 00:16:31.416 "name": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:31.416 "aliases": [ 00:16:31.416 "lvs/nvme0n1p0" 00:16:31.416 ], 00:16:31.416 "product_name": "Logical Volume", 00:16:31.416 "block_size": 4096, 00:16:31.416 "num_blocks": 26476544, 00:16:31.416 "uuid": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:31.416 "assigned_rate_limits": { 00:16:31.416 "rw_ios_per_sec": 0, 00:16:31.416 "rw_mbytes_per_sec": 0, 00:16:31.416 "r_mbytes_per_sec": 0, 00:16:31.417 "w_mbytes_per_sec": 0 00:16:31.417 }, 00:16:31.417 "claimed": false, 00:16:31.417 "zoned": false, 00:16:31.417 "supported_io_types": { 00:16:31.417 "read": true, 00:16:31.417 "write": true, 00:16:31.417 "unmap": true, 00:16:31.417 "flush": false, 00:16:31.417 "reset": true, 00:16:31.417 "nvme_admin": false, 00:16:31.417 "nvme_io": false, 00:16:31.417 "nvme_io_md": false, 00:16:31.417 "write_zeroes": true, 00:16:31.417 "zcopy": false, 00:16:31.417 "get_zone_info": false, 00:16:31.417 "zone_management": false, 00:16:31.417 "zone_append": false, 00:16:31.417 "compare": false, 00:16:31.417 "compare_and_write": false, 00:16:31.417 "abort": false, 00:16:31.417 "seek_hole": true, 00:16:31.417 "seek_data": true, 00:16:31.417 "copy": false, 00:16:31.417 "nvme_iov_md": false 00:16:31.417 }, 00:16:31.417 "driver_specific": { 00:16:31.417 "lvol": { 00:16:31.417 "lvol_store_uuid": "2f7ab50c-8b60-4e01-98bf-1fd00cda08de", 00:16:31.417 "base_bdev": "nvme0n1", 00:16:31.417 "thin_provision": true, 00:16:31.417 "num_allocated_clusters": 0, 00:16:31.417 "snapshot": false, 00:16:31.417 "clone": false, 00:16:31.417 "esnap_clone": false 00:16:31.417 } 00:16:31.417 } 00:16:31.417 } 00:16:31.417 ]' 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:31.417 20:06:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:31.678 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4909a3b-07c0-47ee-afab-00ce4455571f 00:16:31.937 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:31.937 { 00:16:31.937 "name": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:31.937 "aliases": [ 00:16:31.937 "lvs/nvme0n1p0" 00:16:31.937 ], 00:16:31.937 "product_name": "Logical Volume", 00:16:31.937 "block_size": 4096, 00:16:31.937 "num_blocks": 26476544, 00:16:31.937 "uuid": "b4909a3b-07c0-47ee-afab-00ce4455571f", 00:16:31.937 "assigned_rate_limits": { 00:16:31.937 "rw_ios_per_sec": 0, 00:16:31.937 "rw_mbytes_per_sec": 0, 00:16:31.937 "r_mbytes_per_sec": 0, 00:16:31.937 "w_mbytes_per_sec": 0 00:16:31.938 }, 00:16:31.938 "claimed": false, 00:16:31.938 "zoned": false, 00:16:31.938 "supported_io_types": { 00:16:31.938 "read": true, 00:16:31.938 "write": true, 00:16:31.938 "unmap": true, 00:16:31.938 "flush": false, 00:16:31.938 "reset": true, 00:16:31.938 "nvme_admin": false, 00:16:31.938 "nvme_io": false, 00:16:31.938 "nvme_io_md": false, 00:16:31.938 "write_zeroes": true, 00:16:31.938 "zcopy": false, 00:16:31.938 "get_zone_info": false, 00:16:31.938 "zone_management": false, 00:16:31.938 "zone_append": false, 00:16:31.938 "compare": false, 00:16:31.938 "compare_and_write": false, 00:16:31.938 "abort": false, 00:16:31.938 "seek_hole": true, 00:16:31.938 "seek_data": true, 00:16:31.938 "copy": false, 00:16:31.938 "nvme_iov_md": false 00:16:31.938 }, 00:16:31.938 "driver_specific": { 00:16:31.938 "lvol": { 00:16:31.938 "lvol_store_uuid": "2f7ab50c-8b60-4e01-98bf-1fd00cda08de", 00:16:31.938 "base_bdev": "nvme0n1", 00:16:31.938 "thin_provision": true, 00:16:31.938 "num_allocated_clusters": 0, 00:16:31.938 "snapshot": false, 00:16:31.938 "clone": false, 00:16:31.938 "esnap_clone": false 00:16:31.938 } 00:16:31.938 } 00:16:31.938 } 00:16:31.938 ]' 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:31.938 20:06:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b4909a3b-07c0-47ee-afab-00ce4455571f -c nvc0n1p0 --l2p_dram_limit 20 00:16:31.938 [2024-11-19 20:06:05.721078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.721120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:31.938 [2024-11-19 20:06:05.721130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.938 [2024-11-19 20:06:05.721138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.721179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.721189] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.938 [2024-11-19 20:06:05.721196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:31.938 [2024-11-19 20:06:05.721203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.721216] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:31.938 [2024-11-19 20:06:05.721824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:31.938 [2024-11-19 20:06:05.721842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.721850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.938 [2024-11-19 20:06:05.721856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:16:31.938 [2024-11-19 20:06:05.721864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.721924] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 54de003a-c685-4b02-b3f2-bc93a8ab760a 00:16:31.938 [2024-11-19 20:06:05.722944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.722962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:31.938 [2024-11-19 20:06:05.722971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:31.938 [2024-11-19 20:06:05.722979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.727658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.727684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.938 [2024-11-19 20:06:05.727692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:16:31.938 [2024-11-19 20:06:05.727698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.727764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.727771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.938 [2024-11-19 20:06:05.727781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:31.938 [2024-11-19 20:06:05.727787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.727827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.938 [2024-11-19 20:06:05.727834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:31.938 [2024-11-19 20:06:05.727842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:31.938 [2024-11-19 20:06:05.727847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.938 [2024-11-19 20:06:05.727864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:32.199 [2024-11-19 20:06:05.730730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.199 [2024-11-19 20:06:05.730755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.199 [2024-11-19 20:06:05.730762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:16:32.199 [2024-11-19 20:06:05.730769] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.199 [2024-11-19 20:06:05.730792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.199 [2024-11-19 20:06:05.730800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:32.199 [2024-11-19 20:06:05.730807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:32.199 [2024-11-19 20:06:05.730814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.199 [2024-11-19 20:06:05.730824] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:32.199 [2024-11-19 20:06:05.730931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:32.199 [2024-11-19 20:06:05.730940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:32.199 [2024-11-19 20:06:05.730949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:32.199 [2024-11-19 20:06:05.730957] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:32.199 [2024-11-19 20:06:05.730965] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:32.200 [2024-11-19 20:06:05.730971] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:32.200 [2024-11-19 20:06:05.730978] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:32.200 [2024-11-19 20:06:05.730984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:32.200 [2024-11-19 20:06:05.730990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:32.200 [2024-11-19 20:06:05.730996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.200 [2024-11-19 20:06:05.731005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:32.200 [2024-11-19 20:06:05.731011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:16:32.200 [2024-11-19 20:06:05.731017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.200 [2024-11-19 20:06:05.731078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.200 [2024-11-19 20:06:05.731085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:32.200 [2024-11-19 20:06:05.731091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:32.200 [2024-11-19 20:06:05.731099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.200 [2024-11-19 20:06:05.731166] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:32.200 [2024-11-19 20:06:05.731174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:32.200 [2024-11-19 20:06:05.731181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:32.200 [2024-11-19 20:06:05.731201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:32.200 
[2024-11-19 20:06:05.731212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:32.200 [2024-11-19 20:06:05.731217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.200 [2024-11-19 20:06:05.731241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:32.200 [2024-11-19 20:06:05.731248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:32.200 [2024-11-19 20:06:05.731254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.200 [2024-11-19 20:06:05.731266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:32.200 [2024-11-19 20:06:05.731272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:32.200 [2024-11-19 20:06:05.731281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:32.200 [2024-11-19 20:06:05.731292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:32.200 [2024-11-19 20:06:05.731309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:32.200 [2024-11-19 20:06:05.731326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:32.200 [2024-11-19 20:06:05.731342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:32.200 [2024-11-19 20:06:05.731359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:32.200 [2024-11-19 20:06:05.731377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.200 [2024-11-19 20:06:05.731388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:32.200 [2024-11-19 20:06:05.731394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:32.200 [2024-11-19 20:06:05.731400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.200 [2024-11-19 20:06:05.731406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:32.200 [2024-11-19 20:06:05.731412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:32.200 [2024-11-19 20:06:05.731425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:32.200 [2024-11-19 20:06:05.731436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:32.200 [2024-11-19 20:06:05.731441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731448] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:32.200 [2024-11-19 20:06:05.731453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:32.200 [2024-11-19 20:06:05.731460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.200 [2024-11-19 20:06:05.731475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:32.200 [2024-11-19 20:06:05.731480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:32.200 [2024-11-19 20:06:05.731486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:32.200 [2024-11-19 20:06:05.731491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:32.200 [2024-11-19 20:06:05.731497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:32.200 [2024-11-19 20:06:05.731502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:32.200 [2024-11-19 20:06:05.731511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:32.200 [2024-11-19 20:06:05.731518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:32.200 [2024-11-19 20:06:05.731531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:32.200 [2024-11-19 20:06:05.731539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:32.200 [2024-11-19 20:06:05.731544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:32.200 [2024-11-19 20:06:05.731551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:32.200 [2024-11-19 20:06:05.731556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:32.200 [2024-11-19 20:06:05.731562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:32.200 [2024-11-19 20:06:05.731568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:32.200 [2024-11-19 20:06:05.731576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:32.200 [2024-11-19 20:06:05.731582] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:32.200 [2024-11-19 20:06:05.731613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:32.200 [2024-11-19 20:06:05.731619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:32.200 [2024-11-19 20:06:05.731631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:32.200 [2024-11-19 20:06:05.731638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:32.200 [2024-11-19 20:06:05.731643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:32.200 [2024-11-19 20:06:05.731650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.200 [2024-11-19 20:06:05.731657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:32.200 [2024-11-19 20:06:05.731664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:16:32.200 [2024-11-19 20:06:05.731670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.201 [2024-11-19 20:06:05.731695] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
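A note on reading the two dumps above: ftl_layout's dump_region reports each region in MiB, while the ftl_sb_v5 superblock lines give the same regions as blk_offs/blk_sz in hexadecimal FTL blocks. Assuming the 4 KiB block size these numbers imply, the two views reconcile exactly; a quick pure-bash check against the superblock entry at blk_offs 0x5120, which lines up with the p2l0 region in the dump above:

    # Superblock entry "type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800" -> dump_region MiB
    blk_offs=0x5120; blk_sz=0x800; blk_bytes=4096   # 4 KiB FTL block (assumed)
    off100=$(( blk_offs * blk_bytes * 100 / 1048576 ))
    sz100=$((  blk_sz   * blk_bytes * 100 / 1048576 ))
    printf 'offset: %d.%02d MiB\n' $(( off100 / 100 )) $(( off100 % 100 ))   # 81.12 MiB
    printf 'blocks: %d.%02d MiB\n' $(( sz100  / 100 )) $(( sz100  % 100 ))   # 8.00 MiB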
00:16:32.201 [2024-11-19 20:06:05.731702] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:36.407 [2024-11-19 20:06:09.426204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.407 [2024-11-19 20:06:09.426310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:36.407 [2024-11-19 20:06:09.426337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3694.484 ms 00:16:36.407 [2024-11-19 20:06:09.426347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.407 [2024-11-19 20:06:09.453572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.407 [2024-11-19 20:06:09.453613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:36.407 [2024-11-19 20:06:09.453626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.872 ms 00:16:36.408 [2024-11-19 20:06:09.453634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.453748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.453758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:36.408 [2024-11-19 20:06:09.453771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:36.408 [2024-11-19 20:06:09.453778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.497531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.497573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:36.408 [2024-11-19 20:06:09.497589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.719 ms 00:16:36.408 [2024-11-19 20:06:09.497597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.497631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.497643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:36.408 [2024-11-19 20:06:09.497653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:36.408 [2024-11-19 20:06:09.497660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.498034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.498051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:36.408 [2024-11-19 20:06:09.498062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:16:36.408 [2024-11-19 20:06:09.498069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.498176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.498185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:36.408 [2024-11-19 20:06:09.498197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:36.408 [2024-11-19 20:06:09.498204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.511461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.511616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:36.408 [2024-11-19 
20:06:09.511635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.240 ms 00:16:36.408 [2024-11-19 20:06:09.511643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.522999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:36.408 [2024-11-19 20:06:09.528119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.528150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:36.408 [2024-11-19 20:06:09.528162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.408 ms 00:16:36.408 [2024-11-19 20:06:09.528171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.612445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.612494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:36.408 [2024-11-19 20:06:09.612508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.252 ms 00:16:36.408 [2024-11-19 20:06:09.612518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.612698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.612713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:36.408 [2024-11-19 20:06:09.612722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:36.408 [2024-11-19 20:06:09.612732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.636941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.636986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:36.408 [2024-11-19 20:06:09.636998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.165 ms 00:16:36.408 [2024-11-19 20:06:09.637008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.660772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.660938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:36.408 [2024-11-19 20:06:09.660957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.727 ms 00:16:36.408 [2024-11-19 20:06:09.660966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.661862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.661916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:36.408 [2024-11-19 20:06:09.661930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:16:36.408 [2024-11-19 20:06:09.661940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.741703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.741764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:36.408 [2024-11-19 20:06:09.741778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.712 ms 00:16:36.408 [2024-11-19 20:06:09.741790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 
20:06:09.769517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.769570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:36.408 [2024-11-19 20:06:09.769584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.640 ms 00:16:36.408 [2024-11-19 20:06:09.769598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.795667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.795870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:36.408 [2024-11-19 20:06:09.795891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.022 ms 00:16:36.408 [2024-11-19 20:06:09.795902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.822867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.823059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:36.408 [2024-11-19 20:06:09.823082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.606 ms 00:16:36.408 [2024-11-19 20:06:09.823092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.823137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.823153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:36.408 [2024-11-19 20:06:09.823163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:36.408 [2024-11-19 20:06:09.823173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.823303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.408 [2024-11-19 20:06:09.823318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:36.408 [2024-11-19 20:06:09.823328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:36.408 [2024-11-19 20:06:09.823338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.408 [2024-11-19 20:06:09.824458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4102.847 ms, result 0 00:16:36.408 { 00:16:36.408 "name": "ftl0", 00:16:36.408 "uuid": "54de003a-c685-4b02-b3f2-bc93a8ab760a" 00:16:36.408 } 00:16:36.408 20:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:36.408 20:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:36.408 20:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:36.408 20:06:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:36.408 [2024-11-19 20:06:10.128441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:36.408 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:36.408 Zero copy mechanism will not be used. 00:16:36.408 Running I/O for 4 seconds... 
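With startup complete and the bdev registered as ftl0 (JSON above), the harness does a readiness check and then kicks off the first workload. Both commands are verbatim from the trace; only their pairing into a stand-alone snippet is mine:

    # bdevperf.sh@28: confirm the FTL bdev answers stats RPCs before testing
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
        | jq -r .name | grep -qw ftl0
    # bdevperf.sh@30: queue depth 1, random writes, 4 s, 69632 B per I/O.
    # 69632 B = 68 KiB, above the 65536 B zero-copy threshold, hence the
    # notice that the zero copy mechanism will not be used.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        perform_tests -q 1 -w randwrite -t 4 -o 69632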
00:16:38.741 1158.00 IOPS, 76.90 MiB/s [2024-11-19T20:06:13.531Z] 1023.50 IOPS, 67.97 MiB/s [2024-11-19T20:06:14.508Z] 1015.67 IOPS, 67.45 MiB/s [2024-11-19T20:06:14.508Z] 999.75 IOPS, 66.39 MiB/s 00:16:40.714 Latency(us) 00:16:40.714 [2024-11-19T20:06:14.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.714 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:40.714 ftl0 : 4.00 999.56 66.38 0.00 0.00 1054.20 272.54 3377.62 00:16:40.714 [2024-11-19T20:06:14.508Z] =================================================================================================================== 00:16:40.714 [2024-11-19T20:06:14.508Z] Total : 999.56 66.38 0.00 0.00 1054.20 272.54 3377.62 00:16:40.714 { 00:16:40.714 "results": [ 00:16:40.714 { 00:16:40.714 "job": "ftl0", 00:16:40.714 "core_mask": "0x1", 00:16:40.714 "workload": "randwrite", 00:16:40.714 "status": "finished", 00:16:40.714 "queue_depth": 1, 00:16:40.714 "io_size": 69632, 00:16:40.714 "runtime": 4.001771, 00:16:40.714 "iops": 999.5574459408097, 00:16:40.714 "mibps": 66.37686164450689, 00:16:40.714 "io_failed": 0, 00:16:40.714 "io_timeout": 0, 00:16:40.714 "avg_latency_us": 1054.1973661538461, 00:16:40.714 "min_latency_us": 272.54153846153844, 00:16:40.714 "max_latency_us": 3377.6246153846155 00:16:40.714 } 00:16:40.714 ], 00:16:40.714 "core_count": 1 00:16:40.714 } 00:16:40.714 [2024-11-19 20:06:14.138348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:40.714 20:06:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:40.714 [2024-11-19 20:06:14.246824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:40.714 Running I/O for 4 seconds... 
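The MiB/s column in the results above is derived rather than independently measured: it is iops * io_size / 2^20. Plugging in the first run's JSON values reproduces both the reported throughput and the total I/O count:

    awk 'BEGIN {
        iops = 999.5574459408097; io_size = 69632          # from the JSON above
        printf "%.5f MiB/s\n", iops * io_size / 1048576    # -> 66.37686, as reported
        printf "%.0f I/Os\n",  iops * 4.001771             # * runtime(s) -> 4000 I/Os
    }'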
00:16:42.596 6147.00 IOPS, 24.01 MiB/s [2024-11-19T20:06:17.391Z] 5437.50 IOPS, 21.24 MiB/s [2024-11-19T20:06:18.337Z] 4967.00 IOPS, 19.40 MiB/s [2024-11-19T20:06:18.337Z] 4970.00 IOPS, 19.41 MiB/s 00:16:44.543 Latency(us) 00:16:44.543 [2024-11-19T20:06:18.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.543 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.543 ftl0 : 4.04 4957.44 19.37 0.00 0.00 25698.23 365.49 208102.01 00:16:44.543 [2024-11-19T20:06:18.337Z] =================================================================================================================== 00:16:44.543 [2024-11-19T20:06:18.337Z] Total : 4957.44 19.37 0.00 0.00 25698.23 0.00 208102.01 00:16:44.543 [2024-11-19 20:06:18.293087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:44.543 { 00:16:44.543 "results": [ 00:16:44.543 { 00:16:44.543 "job": "ftl0", 00:16:44.543 "core_mask": "0x1", 00:16:44.543 "workload": "randwrite", 00:16:44.543 "status": "finished", 00:16:44.543 "queue_depth": 128, 00:16:44.543 "io_size": 4096, 00:16:44.543 "runtime": 4.035952, 00:16:44.543 "iops": 4957.442506749337, 00:16:44.543 "mibps": 19.365009791989596, 00:16:44.543 "io_failed": 0, 00:16:44.543 "io_timeout": 0, 00:16:44.543 "avg_latency_us": 25698.225915787534, 00:16:44.543 "min_latency_us": 365.48923076923074, 00:16:44.543 "max_latency_us": 208102.00615384616 00:16:44.543 } 00:16:44.543 ], 00:16:44.543 "core_count": 1 00:16:44.543 } 00:16:44.543 20:06:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:44.804 [2024-11-19 20:06:18.413165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:44.804 Running I/O for 4 seconds... 
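The 4 KiB run above also passes a Little's-law sanity check: at steady state the in-flight count is roughly IOPS * average latency, and 4957.44 * 25.698 ms gives about 127.4, just under the configured queue depth of 128 (ramp-up and drain at the run boundaries account for the gap). With values from the JSON above:

    awk 'BEGIN {
        iops = 4957.442506749337
        avg_lat_s = 25698.225915787534 / 1e6   # avg_latency_us -> seconds
        printf "effective depth: %.1f (configured: 128)\n", iops * avg_lat_s
    }'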
00:16:46.693 4385.00 IOPS, 17.13 MiB/s [2024-11-19T20:06:21.429Z] 4434.00 IOPS, 17.32 MiB/s [2024-11-19T20:06:22.816Z] 4413.33 IOPS, 17.24 MiB/s [2024-11-19T20:06:22.816Z] 4385.50 IOPS, 17.13 MiB/s 00:16:49.022 Latency(us) 00:16:49.022 [2024-11-19T20:06:22.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.022 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:49.022 Verification LBA range: start 0x0 length 0x1400000 00:16:49.023 ftl0 : 4.01 4400.26 17.19 0.00 0.00 29005.10 393.85 41943.04 00:16:49.023 [2024-11-19T20:06:22.817Z] =================================================================================================================== 00:16:49.023 [2024-11-19T20:06:22.817Z] Total : 4400.26 17.19 0.00 0.00 29005.10 0.00 41943.04 00:16:49.023 [2024-11-19 20:06:22.444558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:49.023 { 00:16:49.023 "results": [ 00:16:49.023 { 00:16:49.023 "job": "ftl0", 00:16:49.023 "core_mask": "0x1", 00:16:49.023 "workload": "verify", 00:16:49.023 "status": "finished", 00:16:49.023 "verify_range": { 00:16:49.023 "start": 0, 00:16:49.023 "length": 20971520 00:16:49.023 }, 00:16:49.023 "queue_depth": 128, 00:16:49.023 "io_size": 4096, 00:16:49.023 "runtime": 4.014763, 00:16:49.023 "iops": 4400.259741359577, 00:16:49.023 "mibps": 17.188514614685847, 00:16:49.023 "io_failed": 0, 00:16:49.023 "io_timeout": 0, 00:16:49.023 "avg_latency_us": 29005.104625486594, 00:16:49.023 "min_latency_us": 393.84615384615387, 00:16:49.023 "max_latency_us": 41943.04 00:16:49.023 } 00:16:49.023 ], 00:16:49.023 "core_count": 1 00:16:49.023 } 00:16:49.023 20:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:49.023 [2024-11-19 20:06:22.667717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.023 [2024-11-19 20:06:22.667784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.023 [2024-11-19 20:06:22.667801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.023 [2024-11-19 20:06:22.667812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.023 [2024-11-19 20:06:22.667835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.023 [2024-11-19 20:06:22.670814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.023 [2024-11-19 20:06:22.671000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.023 [2024-11-19 20:06:22.671029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:16:49.023 [2024-11-19 20:06:22.671037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.023 [2024-11-19 20:06:22.674413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.023 [2024-11-19 20:06:22.674464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.023 [2024-11-19 20:06:22.674477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:16:49.023 [2024-11-19 20:06:22.674486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.285 [2024-11-19 20:06:22.886834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.886889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:16:49.286 [2024-11-19 20:06:22.886909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 212.316 ms 00:16:49.286 [2024-11-19 20:06:22.886917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.893166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.893232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.286 [2024-11-19 20:06:22.893249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:16:49.286 [2024-11-19 20:06:22.893258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.920062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.920111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.286 [2024-11-19 20:06:22.920127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.727 ms 00:16:49.286 [2024-11-19 20:06:22.920135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.937311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.937384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.286 [2024-11-19 20:06:22.937405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.122 ms 00:16:49.286 [2024-11-19 20:06:22.937414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.937576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.937589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.286 [2024-11-19 20:06:22.937604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:16:49.286 [2024-11-19 20:06:22.937612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.963248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.963442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:49.286 [2024-11-19 20:06:22.963468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.613 ms 00:16:49.286 [2024-11-19 20:06:22.963476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:22.988979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:22.989028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:49.286 [2024-11-19 20:06:22.989042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.392 ms 00:16:49.286 [2024-11-19 20:06:22.989049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:23.013554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:23.013603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.286 [2024-11-19 20:06:23.013618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.452 ms 00:16:49.286 [2024-11-19 20:06:23.013626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:23.038524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.286 [2024-11-19 20:06:23.038710] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.286 [2024-11-19 20:06:23.038739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.803 ms 00:16:49.286 [2024-11-19 20:06:23.038746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.286 [2024-11-19 20:06:23.038857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.286 [2024-11-19 20:06:23.038888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.038997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:49.286 [2024-11-19 20:06:23.039085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.286 [2024-11-19 20:06:23.039400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039827] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.287 [2024-11-19 20:06:23.039870] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.287 [2024-11-19 20:06:23.039880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 54de003a-c685-4b02-b3f2-bc93a8ab760a 00:16:49.287 [2024-11-19 20:06:23.039888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.287 [2024-11-19 20:06:23.039897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.287 [2024-11-19 20:06:23.039907] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.287 [2024-11-19 20:06:23.039917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.287 [2024-11-19 20:06:23.039924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.287 [2024-11-19 20:06:23.039933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.287 [2024-11-19 20:06:23.039941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.287 [2024-11-19 20:06:23.039951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.287 [2024-11-19 20:06:23.039958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.287 [2024-11-19 20:06:23.039968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.287 [2024-11-19 20:06:23.039976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.287 [2024-11-19 20:06:23.039987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:16:49.287 [2024-11-19 20:06:23.039994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.287 [2024-11-19 20:06:23.053645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.287 [2024-11-19 20:06:23.053834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.287 [2024-11-19 20:06:23.053857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.594 ms 00:16:49.287 [2024-11-19 20:06:23.053866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.287 [2024-11-19 20:06:23.054289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.287 [2024-11-19 20:06:23.054303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.287 [2024-11-19 20:06:23.054315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:16:49.287 [2024-11-19 20:06:23.054322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.093173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.093396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.548 [2024-11-19 20:06:23.093426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.093436] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.093513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.093522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.548 [2024-11-19 20:06:23.093534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.093541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.093627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.093640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.548 [2024-11-19 20:06:23.093651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.093659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.093676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.093684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.548 [2024-11-19 20:06:23.093694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.093702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.177538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.177597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.548 [2024-11-19 20:06:23.177615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.177623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.548 [2024-11-19 20:06:23.246090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.548 [2024-11-19 20:06:23.246263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.548 [2024-11-19 20:06:23.246344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.548 [2024-11-19 20:06:23.246487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:49.548 [2024-11-19 20:06:23.246495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.548 [2024-11-19 20:06:23.246549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.548 [2024-11-19 20:06:23.246619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.548 [2024-11-19 20:06:23.246696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.548 [2024-11-19 20:06:23.246707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.548 [2024-11-19 20:06:23.246716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.548 [2024-11-19 20:06:23.246860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 579.092 ms, result 0 00:16:49.548 true 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73170 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73170 ']' 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73170 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73170 00:16:49.548 killing process with pid 73170 00:16:49.548 Received shutdown signal, test time was about 4.000000 seconds 00:16:49.548 00:16:49.548 Latency(us) 00:16:49.548 [2024-11-19T20:06:23.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.548 [2024-11-19T20:06:23.342Z] =================================================================================================================== 00:16:49.548 [2024-11-19T20:06:23.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73170' 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73170 00:16:49.548 20:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73170 00:16:50.493 Remove shared memory files 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:50.493 20:06:24 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:50.493 00:16:50.493 real 0m22.333s 00:16:50.493 user 0m24.770s 00:16:50.493 sys 0m1.038s 00:16:50.493 ************************************ 00:16:50.493 END TEST ftl_bdevperf 00:16:50.493 ************************************ 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.493 20:06:24 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.493 20:06:24 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:50.493 20:06:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:50.493 20:06:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.493 20:06:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:50.493 ************************************ 00:16:50.493 START TEST ftl_trim 00:16:50.493 ************************************ 00:16:50.493 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:50.493 * Looking for test storage... 00:16:50.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.493 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.493 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.493 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.755 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.755 20:06:24 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.756 20:06:24 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.756 --rc genhtml_branch_coverage=1 00:16:50.756 --rc genhtml_function_coverage=1 00:16:50.756 --rc genhtml_legend=1 00:16:50.756 --rc geninfo_all_blocks=1 00:16:50.756 --rc geninfo_unexecuted_blocks=1 00:16:50.756 00:16:50.756 ' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.756 --rc genhtml_branch_coverage=1 00:16:50.756 --rc genhtml_function_coverage=1 00:16:50.756 --rc genhtml_legend=1 00:16:50.756 --rc geninfo_all_blocks=1 00:16:50.756 --rc geninfo_unexecuted_blocks=1 00:16:50.756 00:16:50.756 ' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.756 --rc genhtml_branch_coverage=1 00:16:50.756 --rc genhtml_function_coverage=1 00:16:50.756 --rc genhtml_legend=1 00:16:50.756 --rc geninfo_all_blocks=1 00:16:50.756 --rc geninfo_unexecuted_blocks=1 00:16:50.756 00:16:50.756 ' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.756 --rc genhtml_branch_coverage=1 00:16:50.756 --rc genhtml_function_coverage=1 00:16:50.756 --rc genhtml_legend=1 00:16:50.756 --rc geninfo_all_blocks=1 00:16:50.756 --rc geninfo_unexecuted_blocks=1 00:16:50.756 00:16:50.756 ' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
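The lt/cmp_versions trace above (scripts/common.sh@333-368) is the lcov version gate: split both version strings on '.', '-' or ':', compare component-wise, and since 1.15 < 2 the legacy --rc lcov_* option spelling gets exported. A stripped-down sketch of the traced logic (the real cmp_versions also validates each component via decimal and handles the '>' and '=' operators):

    lt() {   # usage: lt VER1 VER2 -> returns 0 iff VER1 < VER2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 predates 2: use legacy LCOV_OPTS'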
00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
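At this point trim.sh has its environment set (base device 0000:00:11.0, cache device 0000:00:10.0, FTL_BDEV_NAME=ftl0, a 240 s RPC timeout), and the rest of the section is spdk_tgt coming up and the bdev stack being assembled over JSON-RPC. Condensed into the bare rpc.py calls that appear verbatim in the trace below (paths and UUIDs are this run's values; they will differ on another run):

    # Sketch of the stack the test builds next, per the xtrace that follows:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
    $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'                   # find stale lvstores
    $rpc bdev_lvol_delete_lvstore -u 2f7ab50c-8b60-4e01-98bf-1fd00cda08de
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> ec585924-...
    $rpc bdev_lvol_create nvme0n1p0 103424 -t \
        -u ec585924-d21f-4961-a770-3027dd16f3b3                         # thin lvol -> 442b3688-...
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0 (5171 MiB)
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 442b3688-4de6-48e2-be1a-8a63b33671fd -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10         # -> ftl0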
00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73529 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73529 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73529 ']' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.756 20:06:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 20:06:24 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:50.756 [2024-11-19 20:06:24.463372] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:50.756 [2024-11-19 20:06:24.463785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73529 ] 00:16:51.018 [2024-11-19 20:06:24.632162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.018 [2024-11-19 20:06:24.757863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.018 [2024-11-19 20:06:24.758248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.018 [2024-11-19 20:06:24.758284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.963 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.963 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:51.963 20:06:25 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:52.225 20:06:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:52.225 20:06:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:52.225 20:06:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:52.225 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:52.225 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:52.225 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:52.225 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:52.225 20:06:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:52.225 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 
00:16:52.225 { 00:16:52.225 "name": "nvme0n1", 00:16:52.225 "aliases": [ 00:16:52.225 "be8de76f-a487-4f88-8ae5-518252f975ba" 00:16:52.225 ], 00:16:52.225 "product_name": "NVMe disk", 00:16:52.225 "block_size": 4096, 00:16:52.225 "num_blocks": 1310720, 00:16:52.225 "uuid": "be8de76f-a487-4f88-8ae5-518252f975ba", 00:16:52.225 "numa_id": -1, 00:16:52.225 "assigned_rate_limits": { 00:16:52.225 "rw_ios_per_sec": 0, 00:16:52.225 "rw_mbytes_per_sec": 0, 00:16:52.225 "r_mbytes_per_sec": 0, 00:16:52.225 "w_mbytes_per_sec": 0 00:16:52.225 }, 00:16:52.225 "claimed": true, 00:16:52.225 "claim_type": "read_many_write_one", 00:16:52.225 "zoned": false, 00:16:52.225 "supported_io_types": { 00:16:52.225 "read": true, 00:16:52.225 "write": true, 00:16:52.225 "unmap": true, 00:16:52.225 "flush": true, 00:16:52.225 "reset": true, 00:16:52.225 "nvme_admin": true, 00:16:52.225 "nvme_io": true, 00:16:52.225 "nvme_io_md": false, 00:16:52.225 "write_zeroes": true, 00:16:52.225 "zcopy": false, 00:16:52.225 "get_zone_info": false, 00:16:52.225 "zone_management": false, 00:16:52.225 "zone_append": false, 00:16:52.225 "compare": true, 00:16:52.225 "compare_and_write": false, 00:16:52.225 "abort": true, 00:16:52.225 "seek_hole": false, 00:16:52.225 "seek_data": false, 00:16:52.225 "copy": true, 00:16:52.225 "nvme_iov_md": false 00:16:52.225 }, 00:16:52.225 "driver_specific": { 00:16:52.225 "nvme": [ 00:16:52.225 { 00:16:52.225 "pci_address": "0000:00:11.0", 00:16:52.225 "trid": { 00:16:52.225 "trtype": "PCIe", 00:16:52.225 "traddr": "0000:00:11.0" 00:16:52.225 }, 00:16:52.225 "ctrlr_data": { 00:16:52.225 "cntlid": 0, 00:16:52.225 "vendor_id": "0x1b36", 00:16:52.225 "model_number": "QEMU NVMe Ctrl", 00:16:52.225 "serial_number": "12341", 00:16:52.225 "firmware_revision": "8.0.0", 00:16:52.225 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:52.225 "oacs": { 00:16:52.225 "security": 0, 00:16:52.225 "format": 1, 00:16:52.225 "firmware": 0, 00:16:52.225 "ns_manage": 1 00:16:52.225 }, 00:16:52.225 "multi_ctrlr": false, 00:16:52.225 "ana_reporting": false 00:16:52.225 }, 00:16:52.225 "vs": { 00:16:52.225 "nvme_version": "1.4" 00:16:52.225 }, 00:16:52.225 "ns_data": { 00:16:52.225 "id": 1, 00:16:52.225 "can_share": false 00:16:52.225 } 00:16:52.225 } 00:16:52.225 ], 00:16:52.225 "mp_policy": "active_passive" 00:16:52.225 } 00:16:52.225 } 00:16:52.225 ]' 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:52.487 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:52.487 20:06:26 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:52.487 20:06:26 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:52.487 20:06:26 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:52.487 20:06:26 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:52.487 20:06:26 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:52.750 20:06:26 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2f7ab50c-8b60-4e01-98bf-1fd00cda08de 00:16:52.750 20:06:26 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:52.750 20:06:26 ftl.ftl_trim -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f7ab50c-8b60-4e01-98bf-1fd00cda08de 00:16:52.750 20:06:26 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:53.011 20:06:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ec585924-d21f-4961-a770-3027dd16f3b3 00:16:53.011 20:06:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec585924-d21f-4961-a770-3027dd16f3b3 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:53.272 20:06:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.272 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.272 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:53.272 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:53.272 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:53.272 20:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:53.533 { 00:16:53.533 "name": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:53.533 "aliases": [ 00:16:53.533 "lvs/nvme0n1p0" 00:16:53.533 ], 00:16:53.533 "product_name": "Logical Volume", 00:16:53.533 "block_size": 4096, 00:16:53.533 "num_blocks": 26476544, 00:16:53.533 "uuid": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:53.533 "assigned_rate_limits": { 00:16:53.533 "rw_ios_per_sec": 0, 00:16:53.533 "rw_mbytes_per_sec": 0, 00:16:53.533 "r_mbytes_per_sec": 0, 00:16:53.533 "w_mbytes_per_sec": 0 00:16:53.533 }, 00:16:53.533 "claimed": false, 00:16:53.533 "zoned": false, 00:16:53.533 "supported_io_types": { 00:16:53.533 "read": true, 00:16:53.533 "write": true, 00:16:53.533 "unmap": true, 00:16:53.533 "flush": false, 00:16:53.533 "reset": true, 00:16:53.533 "nvme_admin": false, 00:16:53.533 "nvme_io": false, 00:16:53.533 "nvme_io_md": false, 00:16:53.533 "write_zeroes": true, 00:16:53.533 "zcopy": false, 00:16:53.533 "get_zone_info": false, 00:16:53.533 "zone_management": false, 00:16:53.533 "zone_append": false, 00:16:53.533 "compare": false, 00:16:53.533 "compare_and_write": false, 00:16:53.533 "abort": false, 00:16:53.533 "seek_hole": true, 00:16:53.533 "seek_data": true, 00:16:53.533 "copy": false, 00:16:53.533 "nvme_iov_md": false 00:16:53.533 }, 00:16:53.533 "driver_specific": { 00:16:53.533 "lvol": { 00:16:53.533 "lvol_store_uuid": "ec585924-d21f-4961-a770-3027dd16f3b3", 00:16:53.533 "base_bdev": "nvme0n1", 00:16:53.533 "thin_provision": true, 00:16:53.533 "num_allocated_clusters": 0, 00:16:53.533 "snapshot": false, 00:16:53.533 "clone": false, 00:16:53.533 "esnap_clone": false 00:16:53.533 } 00:16:53.533 } 
00:16:53.533 } 00:16:53.533 ]' 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:53.533 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:53.533 20:06:27 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:53.533 20:06:27 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:53.533 20:06:27 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:53.794 20:06:27 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:53.794 20:06:27 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:53.794 20:06:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.794 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:53.794 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:53.794 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:53.794 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:53.794 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:54.056 { 00:16:54.056 "name": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:54.056 "aliases": [ 00:16:54.056 "lvs/nvme0n1p0" 00:16:54.056 ], 00:16:54.056 "product_name": "Logical Volume", 00:16:54.056 "block_size": 4096, 00:16:54.056 "num_blocks": 26476544, 00:16:54.056 "uuid": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:54.056 "assigned_rate_limits": { 00:16:54.056 "rw_ios_per_sec": 0, 00:16:54.056 "rw_mbytes_per_sec": 0, 00:16:54.056 "r_mbytes_per_sec": 0, 00:16:54.056 "w_mbytes_per_sec": 0 00:16:54.056 }, 00:16:54.056 "claimed": false, 00:16:54.056 "zoned": false, 00:16:54.056 "supported_io_types": { 00:16:54.056 "read": true, 00:16:54.056 "write": true, 00:16:54.056 "unmap": true, 00:16:54.056 "flush": false, 00:16:54.056 "reset": true, 00:16:54.056 "nvme_admin": false, 00:16:54.056 "nvme_io": false, 00:16:54.056 "nvme_io_md": false, 00:16:54.056 "write_zeroes": true, 00:16:54.056 "zcopy": false, 00:16:54.056 "get_zone_info": false, 00:16:54.056 "zone_management": false, 00:16:54.056 "zone_append": false, 00:16:54.056 "compare": false, 00:16:54.056 "compare_and_write": false, 00:16:54.056 "abort": false, 00:16:54.056 "seek_hole": true, 00:16:54.056 "seek_data": true, 00:16:54.056 "copy": false, 00:16:54.056 "nvme_iov_md": false 00:16:54.056 }, 00:16:54.056 "driver_specific": { 00:16:54.056 "lvol": { 00:16:54.056 "lvol_store_uuid": "ec585924-d21f-4961-a770-3027dd16f3b3", 00:16:54.056 "base_bdev": "nvme0n1", 00:16:54.056 "thin_provision": true, 00:16:54.056 "num_allocated_clusters": 0, 00:16:54.056 "snapshot": false, 00:16:54.056 "clone": false, 00:16:54.056 "esnap_clone": false 00:16:54.056 } 00:16:54.056 } 00:16:54.056 } 00:16:54.056 ]' 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] 
.block_size' 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:54.056 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:54.056 20:06:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:54.056 20:06:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:54.315 20:06:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:54.315 20:06:27 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:54.315 20:06:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:54.315 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:54.315 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:54.315 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:54.315 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:54.315 20:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 442b3688-4de6-48e2-be1a-8a63b33671fd 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:54.574 { 00:16:54.574 "name": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:54.574 "aliases": [ 00:16:54.574 "lvs/nvme0n1p0" 00:16:54.574 ], 00:16:54.574 "product_name": "Logical Volume", 00:16:54.574 "block_size": 4096, 00:16:54.574 "num_blocks": 26476544, 00:16:54.574 "uuid": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:54.574 "assigned_rate_limits": { 00:16:54.574 "rw_ios_per_sec": 0, 00:16:54.574 "rw_mbytes_per_sec": 0, 00:16:54.574 "r_mbytes_per_sec": 0, 00:16:54.574 "w_mbytes_per_sec": 0 00:16:54.574 }, 00:16:54.574 "claimed": false, 00:16:54.574 "zoned": false, 00:16:54.574 "supported_io_types": { 00:16:54.574 "read": true, 00:16:54.574 "write": true, 00:16:54.574 "unmap": true, 00:16:54.574 "flush": false, 00:16:54.574 "reset": true, 00:16:54.574 "nvme_admin": false, 00:16:54.574 "nvme_io": false, 00:16:54.574 "nvme_io_md": false, 00:16:54.574 "write_zeroes": true, 00:16:54.574 "zcopy": false, 00:16:54.574 "get_zone_info": false, 00:16:54.574 "zone_management": false, 00:16:54.574 "zone_append": false, 00:16:54.574 "compare": false, 00:16:54.574 "compare_and_write": false, 00:16:54.574 "abort": false, 00:16:54.574 "seek_hole": true, 00:16:54.574 "seek_data": true, 00:16:54.574 "copy": false, 00:16:54.574 "nvme_iov_md": false 00:16:54.574 }, 00:16:54.574 "driver_specific": { 00:16:54.574 "lvol": { 00:16:54.574 "lvol_store_uuid": "ec585924-d21f-4961-a770-3027dd16f3b3", 00:16:54.574 "base_bdev": "nvme0n1", 00:16:54.574 "thin_provision": true, 00:16:54.574 "num_allocated_clusters": 0, 00:16:54.574 "snapshot": false, 00:16:54.574 "clone": false, 00:16:54.574 "esnap_clone": false 00:16:54.574 } 00:16:54.574 } 00:16:54.574 } 00:16:54.574 ]' 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:54.574 20:06:28 
ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:54.574 20:06:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:54.574 20:06:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:54.574 20:06:28 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 442b3688-4de6-48e2-be1a-8a63b33671fd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:54.833 [2024-11-19 20:06:28.391986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.392026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:54.833 [2024-11-19 20:06:28.392039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:54.833 [2024-11-19 20:06:28.392046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.398624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.398727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.833 [2024-11-19 20:06:28.398766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.540 ms 00:16:54.833 [2024-11-19 20:06:28.398791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.399292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:54.833 [2024-11-19 20:06:28.401496] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:54.833 [2024-11-19 20:06:28.401830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.401863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.833 [2024-11-19 20:06:28.401893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:16:54.833 [2024-11-19 20:06:28.401916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.402678] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d764a84c-1666-4f29-b824-623425701006 00:16:54.833 [2024-11-19 20:06:28.404795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.404893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:54.833 [2024-11-19 20:06:28.404928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:54.833 [2024-11-19 20:06:28.404955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.410718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.410750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.833 [2024-11-19 20:06:28.410761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.565 ms 00:16:54.833 [2024-11-19 20:06:28.410772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.410904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.410917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.833 [2024-11-19 20:06:28.410926] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:54.833 [2024-11-19 20:06:28.410938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.410972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.410982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:54.833 [2024-11-19 20:06:28.410990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.833 [2024-11-19 20:06:28.410998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.411035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:54.833 [2024-11-19 20:06:28.414558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.414586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.833 [2024-11-19 20:06:28.414601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:16:54.833 [2024-11-19 20:06:28.414608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.414649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.414658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:54.833 [2024-11-19 20:06:28.414668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:54.833 [2024-11-19 20:06:28.414689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.414720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:54.833 [2024-11-19 20:06:28.414852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:54.833 [2024-11-19 20:06:28.414867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:54.833 [2024-11-19 20:06:28.414879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:54.833 [2024-11-19 20:06:28.414890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:54.833 [2024-11-19 20:06:28.414898] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:54.833 [2024-11-19 20:06:28.414908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:54.833 [2024-11-19 20:06:28.414915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:54.833 [2024-11-19 20:06:28.414925] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:54.833 [2024-11-19 20:06:28.414933] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:54.833 [2024-11-19 20:06:28.414943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.414950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:54.833 [2024-11-19 20:06:28.414959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:16:54.833 [2024-11-19 20:06:28.414965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.415077] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.833 [2024-11-19 20:06:28.415085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:54.833 [2024-11-19 20:06:28.415095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:54.833 [2024-11-19 20:06:28.415102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.833 [2024-11-19 20:06:28.415274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:54.834 [2024-11-19 20:06:28.415284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:54.834 [2024-11-19 20:06:28.415294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:54.834 [2024-11-19 20:06:28.415319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:54.834 [2024-11-19 20:06:28.415342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.834 [2024-11-19 20:06:28.415357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:54.834 [2024-11-19 20:06:28.415363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:54.834 [2024-11-19 20:06:28.415371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.834 [2024-11-19 20:06:28.415378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:54.834 [2024-11-19 20:06:28.415386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:54.834 [2024-11-19 20:06:28.415392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:54.834 [2024-11-19 20:06:28.415409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:54.834 [2024-11-19 20:06:28.415434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:54.834 [2024-11-19 20:06:28.415455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:54.834 [2024-11-19 20:06:28.415477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415492] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:54.834 [2024-11-19 20:06:28.415498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:54.834 [2024-11-19 20:06:28.415521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.834 [2024-11-19 20:06:28.415542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:54.834 [2024-11-19 20:06:28.415549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:54.834 [2024-11-19 20:06:28.415557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.834 [2024-11-19 20:06:28.415563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:54.834 [2024-11-19 20:06:28.415571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:54.834 [2024-11-19 20:06:28.415577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:54.834 [2024-11-19 20:06:28.415591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:54.834 [2024-11-19 20:06:28.415599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415605] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:54.834 [2024-11-19 20:06:28.415614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:54.834 [2024-11-19 20:06:28.415621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.834 [2024-11-19 20:06:28.415636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:54.834 [2024-11-19 20:06:28.415647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:54.834 [2024-11-19 20:06:28.415654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:54.834 [2024-11-19 20:06:28.415664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:54.834 [2024-11-19 20:06:28.415671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:54.834 [2024-11-19 20:06:28.415679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:54.834 [2024-11-19 20:06:28.415688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:54.834 [2024-11-19 20:06:28.415699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:54.834 [2024-11-19 20:06:28.415717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:54.834 [2024-11-19 20:06:28.415724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:54.834 [2024-11-19 20:06:28.415733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:54.834 [2024-11-19 20:06:28.415740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:54.834 [2024-11-19 20:06:28.415748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:54.834 [2024-11-19 20:06:28.415755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:54.834 [2024-11-19 20:06:28.415764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:54.834 [2024-11-19 20:06:28.415770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:54.834 [2024-11-19 20:06:28.415780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:54.834 [2024-11-19 20:06:28.415818] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:54.834 [2024-11-19 20:06:28.415834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:54.834 [2024-11-19 20:06:28.415850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:54.834 [2024-11-19 20:06:28.415857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:54.834 [2024-11-19 20:06:28.415866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:54.834 [2024-11-19 20:06:28.415872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.834 [2024-11-19 20:06:28.415881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:54.835 [2024-11-19 20:06:28.415888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:16:54.835 [2024-11-19 20:06:28.415896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.835 [2024-11-19 20:06:28.415980] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:54.835 [2024-11-19 20:06:28.415997] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:57.364 [2024-11-19 20:06:30.644396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.644458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:57.364 [2024-11-19 20:06:30.644474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2228.406 ms 00:16:57.364 [2024-11-19 20:06:30.644485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.670337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.670551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.364 [2024-11-19 20:06:30.670570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.610 ms 00:16:57.364 [2024-11-19 20:06:30.670580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.670731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.670743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:57.364 [2024-11-19 20:06:30.670752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:57.364 [2024-11-19 20:06:30.670764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.721111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.721513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.364 [2024-11-19 20:06:30.721564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.286 ms 00:16:57.364 [2024-11-19 20:06:30.721596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.721779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.721816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.364 [2024-11-19 20:06:30.721840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:57.364 [2024-11-19 20:06:30.721865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.722261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.722288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.364 [2024-11-19 20:06:30.722297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:16:57.364 [2024-11-19 20:06:30.722306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.722425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.722434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.364 [2024-11-19 20:06:30.722443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:57.364 [2024-11-19 20:06:30.722454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.736956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.737092] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.364 [2024-11-19 20:06:30.737108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.464 ms 00:16:57.364 [2024-11-19 20:06:30.737117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.748577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:57.364 [2024-11-19 20:06:30.763201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.763246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:57.364 [2024-11-19 20:06:30.763257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.976 ms 00:16:57.364 [2024-11-19 20:06:30.763265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.825839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.825882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:57.364 [2024-11-19 20:06:30.825895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.508 ms 00:16:57.364 [2024-11-19 20:06:30.825903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.826117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.826129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.364 [2024-11-19 20:06:30.826142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:57.364 [2024-11-19 20:06:30.826149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.849922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.849954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:57.364 [2024-11-19 20:06:30.849968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:16:57.364 [2024-11-19 20:06:30.849976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.872750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.872780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:57.364 [2024-11-19 20:06:30.872793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:16:57.364 [2024-11-19 20:06:30.872801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.873426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.873445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:57.364 [2024-11-19 20:06:30.873456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:16:57.364 [2024-11-19 20:06:30.873463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.942704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.942881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:57.364 [2024-11-19 20:06:30.942905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.206 ms 00:16:57.364 [2024-11-19 20:06:30.942913] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.967242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.967279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:57.364 [2024-11-19 20:06:30.967292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.228 ms 00:16:57.364 [2024-11-19 20:06:30.967300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:30.990755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:30.990788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:57.364 [2024-11-19 20:06:30.990800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.387 ms 00:16:57.364 [2024-11-19 20:06:30.990807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:31.014199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:31.014344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:57.364 [2024-11-19 20:06:31.014364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.308 ms 00:16:57.364 [2024-11-19 20:06:31.014384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:31.014445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:31.014457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:57.364 [2024-11-19 20:06:31.014470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:57.364 [2024-11-19 20:06:31.014477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:31.014558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-11-19 20:06:31.014568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:57.364 [2024-11-19 20:06:31.014577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:57.364 [2024-11-19 20:06:31.014585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-11-19 20:06:31.015411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.364 [2024-11-19 20:06:31.018427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2623.125 ms, result 0 00:16:57.364 [2024-11-19 20:06:31.019374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.364 { 00:16:57.364 "name": "ftl0", 00:16:57.364 "uuid": "d764a84c-1666-4f29-b824-623425701006" 00:16:57.364 } 00:16:57.364 20:06:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.364 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:16:57.622 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:57.881 [ 00:16:57.881 { 00:16:57.881 "name": "ftl0", 00:16:57.881 "aliases": [ 00:16:57.881 "d764a84c-1666-4f29-b824-623425701006" 00:16:57.881 ], 00:16:57.881 "product_name": "FTL disk", 00:16:57.881 "block_size": 4096, 00:16:57.881 "num_blocks": 23592960, 00:16:57.881 "uuid": "d764a84c-1666-4f29-b824-623425701006", 00:16:57.881 "assigned_rate_limits": { 00:16:57.881 "rw_ios_per_sec": 0, 00:16:57.881 "rw_mbytes_per_sec": 0, 00:16:57.881 "r_mbytes_per_sec": 0, 00:16:57.881 "w_mbytes_per_sec": 0 00:16:57.881 }, 00:16:57.881 "claimed": false, 00:16:57.881 "zoned": false, 00:16:57.881 "supported_io_types": { 00:16:57.881 "read": true, 00:16:57.881 "write": true, 00:16:57.881 "unmap": true, 00:16:57.881 "flush": true, 00:16:57.881 "reset": false, 00:16:57.881 "nvme_admin": false, 00:16:57.881 "nvme_io": false, 00:16:57.881 "nvme_io_md": false, 00:16:57.881 "write_zeroes": true, 00:16:57.881 "zcopy": false, 00:16:57.881 "get_zone_info": false, 00:16:57.881 "zone_management": false, 00:16:57.881 "zone_append": false, 00:16:57.881 "compare": false, 00:16:57.881 "compare_and_write": false, 00:16:57.881 "abort": false, 00:16:57.881 "seek_hole": false, 00:16:57.881 "seek_data": false, 00:16:57.881 "copy": false, 00:16:57.881 "nvme_iov_md": false 00:16:57.881 }, 00:16:57.881 "driver_specific": { 00:16:57.881 "ftl": { 00:16:57.881 "base_bdev": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:57.881 "cache": "nvc0n1p0" 00:16:57.881 } 00:16:57.881 } 00:16:57.881 } 00:16:57.881 ] 00:16:57.881 20:06:31 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:57.881 20:06:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:57.881 20:06:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:57.881 20:06:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:57.881 20:06:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:58.139 20:06:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:58.139 { 00:16:58.139 "name": "ftl0", 00:16:58.139 "aliases": [ 00:16:58.139 "d764a84c-1666-4f29-b824-623425701006" 00:16:58.139 ], 00:16:58.139 "product_name": "FTL disk", 00:16:58.139 "block_size": 4096, 00:16:58.139 "num_blocks": 23592960, 00:16:58.139 "uuid": "d764a84c-1666-4f29-b824-623425701006", 00:16:58.139 "assigned_rate_limits": { 00:16:58.139 "rw_ios_per_sec": 0, 00:16:58.139 "rw_mbytes_per_sec": 0, 00:16:58.139 "r_mbytes_per_sec": 0, 00:16:58.139 "w_mbytes_per_sec": 0 00:16:58.139 }, 00:16:58.139 "claimed": false, 00:16:58.139 "zoned": false, 00:16:58.139 "supported_io_types": { 00:16:58.139 "read": true, 00:16:58.139 "write": true, 00:16:58.139 "unmap": true, 00:16:58.139 "flush": true, 00:16:58.139 "reset": false, 00:16:58.139 "nvme_admin": false, 00:16:58.139 "nvme_io": false, 00:16:58.139 "nvme_io_md": false, 00:16:58.139 "write_zeroes": true, 00:16:58.139 "zcopy": false, 00:16:58.139 "get_zone_info": false, 00:16:58.139 "zone_management": false, 00:16:58.139 "zone_append": false, 00:16:58.139 "compare": false, 00:16:58.139 "compare_and_write": false, 00:16:58.139 "abort": false, 00:16:58.139 "seek_hole": false, 00:16:58.139 "seek_data": false, 00:16:58.139 "copy": false, 00:16:58.139 "nvme_iov_md": false 00:16:58.139 }, 00:16:58.139 "driver_specific": { 00:16:58.139 "ftl": { 00:16:58.139 
"base_bdev": "442b3688-4de6-48e2-be1a-8a63b33671fd", 00:16:58.139 "cache": "nvc0n1p0" 00:16:58.139 } 00:16:58.139 } 00:16:58.139 } 00:16:58.139 ]' 00:16:58.139 20:06:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:58.139 20:06:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:58.139 20:06:31 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:58.398 [2024-11-19 20:06:32.075480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.075642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:58.398 [2024-11-19 20:06:32.075703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.398 [2024-11-19 20:06:32.075720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.075760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:58.398 [2024-11-19 20:06:32.078381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.078412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:58.398 [2024-11-19 20:06:32.078427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:16:58.398 [2024-11-19 20:06:32.078437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.079064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.079079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:58.398 [2024-11-19 20:06:32.079089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:16:58.398 [2024-11-19 20:06:32.079097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.082752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.082777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:58.398 [2024-11-19 20:06:32.082788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:16:58.398 [2024-11-19 20:06:32.082796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.089813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.089926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:58.398 [2024-11-19 20:06:32.089944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:16:58.398 [2024-11-19 20:06:32.089952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.113684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.113715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:58.398 [2024-11-19 20:06:32.113730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.649 ms 00:16:58.398 [2024-11-19 20:06:32.113737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.128777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.128897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:58.398 [2024-11-19 20:06:32.128916] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.979 ms 00:16:58.398 [2024-11-19 20:06:32.128926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.129143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.129154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:58.398 [2024-11-19 20:06:32.129165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:58.398 [2024-11-19 20:06:32.129172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.151997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.152107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:58.398 [2024-11-19 20:06:32.152125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.786 ms 00:16:58.398 [2024-11-19 20:06:32.152133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.398 [2024-11-19 20:06:32.174989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.398 [2024-11-19 20:06:32.175096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:58.398 [2024-11-19 20:06:32.175115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.796 ms 00:16:58.398 [2024-11-19 20:06:32.175123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.658 [2024-11-19 20:06:32.197643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.658 [2024-11-19 20:06:32.197754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:58.658 [2024-11-19 20:06:32.197771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.448 ms 00:16:58.658 [2024-11-19 20:06:32.197778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.658 [2024-11-19 20:06:32.220164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.658 [2024-11-19 20:06:32.220194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:58.658 [2024-11-19 20:06:32.220205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.268 ms 00:16:58.658 [2024-11-19 20:06:32.220212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.658 [2024-11-19 20:06:32.220304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:58.658 [2024-11-19 20:06:32.220320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:16:58.658 [2024-11-19 20:06:32.220385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:58.658 [2024-11-19 20:06:32.220792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.220997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221006] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:58.659 [2024-11-19 20:06:32.221168] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:58.659 [2024-11-19 20:06:32.221179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:16:58.659 [2024-11-19 20:06:32.221186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:58.659 [2024-11-19 20:06:32.221194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:58.659 [2024-11-19 20:06:32.221201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:58.659 [2024-11-19 20:06:32.221210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:58.659 [2024-11-19 20:06:32.221218] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:58.659 [2024-11-19 20:06:32.221239] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:58.659 [2024-11-19 20:06:32.221246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:58.659 [2024-11-19 20:06:32.221254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:58.659 [2024-11-19 20:06:32.221260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:58.659 [2024-11-19 20:06:32.221269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.659 [2024-11-19 20:06:32.221276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:58.659 [2024-11-19 20:06:32.221286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:16:58.659 [2024-11-19 20:06:32.221293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.233801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.659 [2024-11-19 20:06:32.233831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:58.659 [2024-11-19 20:06:32.233846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.457 ms 00:16:58.659 [2024-11-19 20:06:32.233854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.234248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.659 [2024-11-19 20:06:32.234266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:58.659 [2024-11-19 20:06:32.234277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:16:58.659 [2024-11-19 20:06:32.234284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.278067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.278101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.659 [2024-11-19 20:06:32.278113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.278132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.278248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.278258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.659 [2024-11-19 20:06:32.278269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.278276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.278341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.278350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.659 [2024-11-19 20:06:32.278365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.278372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.278405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.278413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.659 [2024-11-19 20:06:32.278422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.278430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
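Note: each trace_step record above carries its own duration, and the "WAF: inf" in the statistics dump is the write amplification factor (total writes divided by user writes), undefined at this point because no user writes have occurred yet. To sanity-check the per-step durations against the "Management process finished ... duration" summary that follows, the captured console output can be totaled; a minimal sketch, assuming the output was saved to a file named ftl_shutdown.log (hypothetical name):

  # Extract every "duration: X ms" field and sum them (bash + awk).
  grep -o 'duration: [0-9.]* ms' ftl_shutdown.log \
    | awk '{sum += $2} END {printf "sum of step durations: %.3f ms\n", sum}'

The total will not match the reported overall duration exactly, since the summary also covers time spent between steps.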
00:16:58.659 [2024-11-19 20:06:32.359298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.359489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.659 [2024-11-19 20:06:32.359508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.359516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.423171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.423208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.659 [2024-11-19 20:06:32.423232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.423241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.423332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.423342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.659 [2024-11-19 20:06:32.423368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.659 [2024-11-19 20:06:32.423378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.659 [2024-11-19 20:06:32.423444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.659 [2024-11-19 20:06:32.423454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.659 [2024-11-19 20:06:32.423464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.660 [2024-11-19 20:06:32.423471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.660 [2024-11-19 20:06:32.423580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.660 [2024-11-19 20:06:32.423591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.660 [2024-11-19 20:06:32.423600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.660 [2024-11-19 20:06:32.423607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.660 [2024-11-19 20:06:32.423670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.660 [2024-11-19 20:06:32.423679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.660 [2024-11-19 20:06:32.423688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.660 [2024-11-19 20:06:32.423694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.660 [2024-11-19 20:06:32.423743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.660 [2024-11-19 20:06:32.423751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.660 [2024-11-19 20:06:32.423761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.660 [2024-11-19 20:06:32.423769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.660 [2024-11-19 20:06:32.423823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.660 [2024-11-19 20:06:32.423838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.660 [2024-11-19 20:06:32.423847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.660 [2024-11-19 20:06:32.423854] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.660 [2024-11-19 20:06:32.424034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.540 ms, result 0 00:16:58.660 true 00:16:58.660 20:06:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73529 00:16:58.660 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73529 ']' 00:16:58.660 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73529 00:16:58.660 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73529 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.918 killing process with pid 73529 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73529' 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73529 00:16:58.918 20:06:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73529 00:17:05.481 20:06:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:06.051 65536+0 records in 00:17:06.051 65536+0 records out 00:17:06.051 268435456 bytes (268 MB, 256 MiB) copied, 1.0933 s, 246 MB/s 00:17:06.051 20:06:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.051 [2024-11-19 20:06:39.651658] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
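For reference, the two commands driving this phase of trim.sh can be replayed by hand. The dd step writes 65536 records of 4 KiB, i.e. 65536 * 4096 = 268435456 bytes = 256 MiB, matching the "268435456 bytes (268 MB, 256 MiB) copied" line above; the trace only shows if=, bs= and count=, so the of= destination below is an assumption (taken to be the random_pattern file that spdk_dd then reads):

  # trim.sh@66: generate 256 MiB of random data (65536 x 4 KiB).
  # of= path is assumed; it is not shown in the trace.
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
  # trim.sh@69: replay the pattern onto the FTL bdev through spdk_dd.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json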
00:17:06.051 [2024-11-19 20:06:39.651814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73714 ] 00:17:06.051 [2024-11-19 20:06:39.816146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.313 [2024-11-19 20:06:39.936101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.574 [2024-11-19 20:06:40.227312] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.574 [2024-11-19 20:06:40.227400] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.837 [2024-11-19 20:06:40.390406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.390493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.837 [2024-11-19 20:06:40.390511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:06.837 [2024-11-19 20:06:40.390520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.393557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.393791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.837 [2024-11-19 20:06:40.393814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:17:06.837 [2024-11-19 20:06:40.393822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.393946] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.837 [2024-11-19 20:06:40.394713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.837 [2024-11-19 20:06:40.394743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.394752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.837 [2024-11-19 20:06:40.394763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:17:06.837 [2024-11-19 20:06:40.394771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.397028] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:06.837 [2024-11-19 20:06:40.411843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.411904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:06.837 [2024-11-19 20:06:40.411920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.818 ms 00:17:06.837 [2024-11-19 20:06:40.411930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.412063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.412077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:06.837 [2024-11-19 20:06:40.412087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:06.837 [2024-11-19 20:06:40.412096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.420603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:06.837 [2024-11-19 20:06:40.420653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.837 [2024-11-19 20:06:40.420664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.459 ms 00:17:06.837 [2024-11-19 20:06:40.420673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.420786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.420797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.837 [2024-11-19 20:06:40.420805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:06.837 [2024-11-19 20:06:40.420814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.420844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.420857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.837 [2024-11-19 20:06:40.420866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:06.837 [2024-11-19 20:06:40.420874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.420898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.837 [2024-11-19 20:06:40.425044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.425087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.837 [2024-11-19 20:06:40.425098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:17:06.837 [2024-11-19 20:06:40.425106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.425185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.837 [2024-11-19 20:06:40.425195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.837 [2024-11-19 20:06:40.425205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:06.837 [2024-11-19 20:06:40.425213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.837 [2024-11-19 20:06:40.425253] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:06.837 [2024-11-19 20:06:40.425279] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:06.837 [2024-11-19 20:06:40.425317] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:06.837 [2024-11-19 20:06:40.425334] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:06.837 [2024-11-19 20:06:40.425455] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:06.837 [2024-11-19 20:06:40.425467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.837 [2024-11-19 20:06:40.425479] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:06.837 [2024-11-19 20:06:40.425494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.837 [2024-11-19 20:06:40.425508] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.838 [2024-11-19 20:06:40.425525] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.838 [2024-11-19 20:06:40.425532] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:06.838 [2024-11-19 20:06:40.425541] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:06.838 [2024-11-19 20:06:40.425549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.838 [2024-11-19 20:06:40.425558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.838 [2024-11-19 20:06:40.425567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:06.838 [2024-11-19 20:06:40.425574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.838 [2024-11-19 20:06:40.425663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.838 [2024-11-19 20:06:40.425672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.838 [2024-11-19 20:06:40.425683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:06.838 [2024-11-19 20:06:40.425691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.838 [2024-11-19 20:06:40.425794] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.838 [2024-11-19 20:06:40.425805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.838 [2024-11-19 20:06:40.425814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.838 [2024-11-19 20:06:40.425837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.838 [2024-11-19 20:06:40.425858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.838 [2024-11-19 20:06:40.425871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.838 [2024-11-19 20:06:40.425878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.838 [2024-11-19 20:06:40.425884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.838 [2024-11-19 20:06:40.425902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.838 [2024-11-19 20:06:40.425909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:06.838 [2024-11-19 20:06:40.425915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:06.838 [2024-11-19 20:06:40.425929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425935] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.838 [2024-11-19 20:06:40.425949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.838 [2024-11-19 20:06:40.425969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.838 [2024-11-19 20:06:40.425982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.838 [2024-11-19 20:06:40.425988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.838 [2024-11-19 20:06:40.425995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.838 [2024-11-19 20:06:40.426001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:06.838 [2024-11-19 20:06:40.426008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:06.838 [2024-11-19 20:06:40.426014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.838 [2024-11-19 20:06:40.426020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.838 [2024-11-19 20:06:40.426027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:06.838 [2024-11-19 20:06:40.426033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.838 [2024-11-19 20:06:40.426040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.838 [2024-11-19 20:06:40.426046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:06.838 [2024-11-19 20:06:40.426052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.838 [2024-11-19 20:06:40.426058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:06.838 [2024-11-19 20:06:40.426065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:06.838 [2024-11-19 20:06:40.426072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.426078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:06.838 [2024-11-19 20:06:40.426085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:06.838 [2024-11-19 20:06:40.426092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.426098] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.838 [2024-11-19 20:06:40.426106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.838 [2024-11-19 20:06:40.426115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.838 [2024-11-19 20:06:40.426124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.838 [2024-11-19 20:06:40.426132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.838 [2024-11-19 20:06:40.426139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.838 [2024-11-19 20:06:40.426146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.838 
[2024-11-19 20:06:40.426152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.838 [2024-11-19 20:06:40.426158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.838 [2024-11-19 20:06:40.426164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.838 [2024-11-19 20:06:40.426173] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.838 [2024-11-19 20:06:40.426182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.838 [2024-11-19 20:06:40.426197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:06.838 [2024-11-19 20:06:40.426204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:06.838 [2024-11-19 20:06:40.426211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:06.838 [2024-11-19 20:06:40.426238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:06.838 [2024-11-19 20:06:40.426246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:06.838 [2024-11-19 20:06:40.426253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:06.838 [2024-11-19 20:06:40.426261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:06.838 [2024-11-19 20:06:40.426268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:06.838 [2024-11-19 20:06:40.426276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:06.838 [2024-11-19 20:06:40.426315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.838 [2024-11-19 20:06:40.426324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.838 [2024-11-19 20:06:40.426339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.838 [2024-11-19 20:06:40.426346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.838 [2024-11-19 20:06:40.426353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.838 [2024-11-19 20:06:40.426361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.838 [2024-11-19 20:06:40.426377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.838 [2024-11-19 20:06:40.426390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:17:06.838 [2024-11-19 20:06:40.426398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.838 [2024-11-19 20:06:40.459078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.838 [2024-11-19 20:06:40.459128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.838 [2024-11-19 20:06:40.459141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.626 ms 00:17:06.838 [2024-11-19 20:06:40.459149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.838 [2024-11-19 20:06:40.459308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.838 [2024-11-19 20:06:40.459325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:06.839 [2024-11-19 20:06:40.459335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:06.839 [2024-11-19 20:06:40.459343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.514573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.514853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.839 [2024-11-19 20:06:40.514877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.185 ms 00:17:06.839 [2024-11-19 20:06:40.514894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.515021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.515034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.839 [2024-11-19 20:06:40.515045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:06.839 [2024-11-19 20:06:40.515053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.515633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.515673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.839 [2024-11-19 20:06:40.515684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:06.839 [2024-11-19 20:06:40.515703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.515859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.515870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.839 [2024-11-19 20:06:40.515879] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:06.839 [2024-11-19 20:06:40.515887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.532139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.532194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.839 [2024-11-19 20:06:40.532206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.229 ms 00:17:06.839 [2024-11-19 20:06:40.532214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.546864] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:06.839 [2024-11-19 20:06:40.546923] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.839 [2024-11-19 20:06:40.546938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.546947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.839 [2024-11-19 20:06:40.546958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.566 ms 00:17:06.839 [2024-11-19 20:06:40.546966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.573131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.573207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.839 [2024-11-19 20:06:40.573248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.058 ms 00:17:06.839 [2024-11-19 20:06:40.573257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.586569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.586619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.839 [2024-11-19 20:06:40.586632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.202 ms 00:17:06.839 [2024-11-19 20:06:40.586640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.599570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.599620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.839 [2024-11-19 20:06:40.599633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.831 ms 00:17:06.839 [2024-11-19 20:06:40.599640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.839 [2024-11-19 20:06:40.600359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.839 [2024-11-19 20:06:40.600384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.839 [2024-11-19 20:06:40.600396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:06.839 [2024-11-19 20:06:40.600404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.667481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.667549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:07.101 [2024-11-19 20:06:40.667564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.048 ms 00:17:07.101 [2024-11-19 20:06:40.667573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.678961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.101 [2024-11-19 20:06:40.698681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.698920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.101 [2024-11-19 20:06:40.698942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.998 ms 00:17:07.101 [2024-11-19 20:06:40.698952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.699066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.699081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:07.101 [2024-11-19 20:06:40.699092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:07.101 [2024-11-19 20:06:40.699100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.699161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.699171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:07.101 [2024-11-19 20:06:40.699180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:07.101 [2024-11-19 20:06:40.699189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.699248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.699259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:07.101 [2024-11-19 20:06:40.699271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:07.101 [2024-11-19 20:06:40.699279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.699319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:07.101 [2024-11-19 20:06:40.699330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.699338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:07.101 [2024-11-19 20:06:40.699347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:07.101 [2024-11-19 20:06:40.699355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.725993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.726196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:07.101 [2024-11-19 20:06:40.726238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.615 ms 00:17:07.101 [2024-11-19 20:06:40.726249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.101 [2024-11-19 20:06:40.726384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.101 [2024-11-19 20:06:40.726396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:07.101 [2024-11-19 20:06:40.726406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:07.102 [2024-11-19 20:06:40.726415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
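The layout dump during this startup reported "L2P entries: 23592960" with an "L2P address size" of 4 bytes, alongside a 90.00 MiB l2p region; those numbers are self-consistent, and the entry count also matches the num_blocks value (23592960) that trim.sh@60 extracted with jq earlier. A quick arithmetic check in the shell:

  # One 4-byte L2P entry per logical block: 23592960 * 4 = 94371840 bytes.
  echo $(( 23592960 * 4 ))                # 94371840
  echo $(( 23592960 * 4 / 1024 / 1024 )) # 90 -> the 90.00 MiB "Region l2p" above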
00:17:07.102 [2024-11-19 20:06:40.727543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.102 [2024-11-19 20:06:40.731259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.765 ms, result 0 00:17:07.102 [2024-11-19 20:06:40.732163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.102 [2024-11-19 20:06:40.746449] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.047  [2024-11-19T20:06:42.778Z] Copying: 11/256 [MB] (11 MBps) [2024-11-19T20:06:44.164Z] Copying: 36/256 [MB] (24 MBps) [2024-11-19T20:06:45.104Z] Copying: 56/256 [MB] (20 MBps) [2024-11-19T20:06:46.047Z] Copying: 68/256 [MB] (12 MBps) [2024-11-19T20:06:46.992Z] Copying: 78/256 [MB] (10 MBps) [2024-11-19T20:06:47.934Z] Copying: 94/256 [MB] (15 MBps) [2024-11-19T20:06:48.878Z] Copying: 106/256 [MB] (12 MBps) [2024-11-19T20:06:49.822Z] Copying: 120/256 [MB] (13 MBps) [2024-11-19T20:06:50.814Z] Copying: 130/256 [MB] (10 MBps) [2024-11-19T20:06:51.759Z] Copying: 142/256 [MB] (11 MBps) [2024-11-19T20:06:53.149Z] Copying: 152/256 [MB] (10 MBps) [2024-11-19T20:06:54.136Z] Copying: 166224/262144 [kB] (10104 kBps) [2024-11-19T20:06:55.076Z] Copying: 172/256 [MB] (10 MBps) [2024-11-19T20:06:56.020Z] Copying: 186/256 [MB] (14 MBps) [2024-11-19T20:06:56.954Z] Copying: 203/256 [MB] (16 MBps) [2024-11-19T20:06:57.889Z] Copying: 227/256 [MB] (24 MBps) [2024-11-19T20:06:57.889Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-19 20:06:57.628700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:24.095 [2024-11-19 20:06:57.635979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.636087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.095 [2024-11-19 20:06:57.636101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.095 [2024-11-19 20:06:57.636108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.636126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.095 [2024-11-19 20:06:57.638211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.638246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.095 [2024-11-19 20:06:57.638254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:17:24.095 [2024-11-19 20:06:57.638260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.639739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.639766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.095 [2024-11-19 20:06:57.639774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:17:24.095 [2024-11-19 20:06:57.639780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.645750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.645775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.095 [2024-11-19 20:06:57.645786] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.957 ms 00:17:24.095 [2024-11-19 20:06:57.645792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.651161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.651264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:24.095 [2024-11-19 20:06:57.651277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.344 ms 00:17:24.095 [2024-11-19 20:06:57.651283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.668313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.668337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.095 [2024-11-19 20:06:57.668346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.990 ms 00:17:24.095 [2024-11-19 20:06:57.668352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.679286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.679310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.095 [2024-11-19 20:06:57.679322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.908 ms 00:17:24.095 [2024-11-19 20:06:57.679330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.679421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.679428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.095 [2024-11-19 20:06:57.679435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:24.095 [2024-11-19 20:06:57.679441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.696756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.696780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:24.095 [2024-11-19 20:06:57.696788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.303 ms 00:17:24.095 [2024-11-19 20:06:57.696793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.714505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.714596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:24.095 [2024-11-19 20:06:57.714607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.679 ms 00:17:24.095 [2024-11-19 20:06:57.714612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.731841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.731872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.095 [2024-11-19 20:06:57.731880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.205 ms 00:17:24.095 [2024-11-19 20:06:57.731884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.748845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.095 [2024-11-19 20:06:57.748868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:17:24.095 [2024-11-19 20:06:57.748876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:17:24.095 [2024-11-19 20:06:57.748881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.095 [2024-11-19 20:06:57.748906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.095 [2024-11-19 20:06:57.748919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.748999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 
20:06:57.749041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.095 [2024-11-19 20:06:57.749066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:24.096 [2024-11-19 20:06:57.749181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.096 [2024-11-19 20:06:57.749522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.096 [2024-11-19 20:06:57.749527] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:17:24.096 [2024-11-19 20:06:57.749533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.096 [2024-11-19 20:06:57.749538] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.096 [2024-11-19 20:06:57.749543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.096 [2024-11-19 20:06:57.749549] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.096 [2024-11-19 20:06:57.749554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.096 [2024-11-19 20:06:57.749560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.096 [2024-11-19 20:06:57.749566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.096 [2024-11-19 20:06:57.749570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.096 [2024-11-19 20:06:57.749575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.096 [2024-11-19 20:06:57.749581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.096 [2024-11-19 20:06:57.749586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.096 [2024-11-19 20:06:57.749594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:17:24.096 [2024-11-19 20:06:57.749599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.096 [2024-11-19 20:06:57.759108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.096 [2024-11-19 20:06:57.759132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.096 [2024-11-19 20:06:57.759139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.497 ms 00:17:24.096 [2024-11-19 20:06:57.759145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 [2024-11-19 20:06:57.759432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.097 [2024-11-19 20:06:57.759447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.097 [2024-11-19 20:06:57.759453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:24.097 [2024-11-19 20:06:57.759458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 [2024-11-19 20:06:57.786899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.097 [2024-11-19 20:06:57.786925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.097 [2024-11-19 20:06:57.786932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.097 [2024-11-19 20:06:57.786938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 
[2024-11-19 20:06:57.786993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.097 [2024-11-19 20:06:57.787002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.097 [2024-11-19 20:06:57.787008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.097 [2024-11-19 20:06:57.787013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 [2024-11-19 20:06:57.787044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.097 [2024-11-19 20:06:57.787051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.097 [2024-11-19 20:06:57.787057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.097 [2024-11-19 20:06:57.787063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 [2024-11-19 20:06:57.787075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.097 [2024-11-19 20:06:57.787081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.097 [2024-11-19 20:06:57.787088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.097 [2024-11-19 20:06:57.787094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.097 [2024-11-19 20:06:57.846928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.097 [2024-11-19 20:06:57.846957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.097 [2024-11-19 20:06:57.846966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.097 [2024-11-19 20:06:57.846972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.355 [2024-11-19 20:06:57.895721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.895727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.355 [2024-11-19 20:06:57.895778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.895784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.355 [2024-11-19 20:06:57.895818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.895826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.355 [2024-11-19 20:06:57.895909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.895914] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.355 [2024-11-19 20:06:57.895950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.895956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.895985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.895991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.355 [2024-11-19 20:06:57.895997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.896003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.896036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.355 [2024-11-19 20:06:57.896043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.355 [2024-11-19 20:06:57.896049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.355 [2024-11-19 20:06:57.896057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.355 [2024-11-19 20:06:57.896161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 260.163 ms, result 0 00:17:24.926 00:17:24.926 00:17:24.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.926 20:06:58 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73916 00:17:24.926 20:06:58 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73916 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73916 ']' 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.926 20:06:58 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.926 20:06:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:24.926 [2024-11-19 20:06:58.599680] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
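
At this point the trim test (ftl/trim.sh) restarts a fresh SPDK target and drives it over the JSON-RPC socket: the xtrace lines above show spdk_tgt being launched with -L ftl_init and the script waiting for /var/tmp/spdk.sock, while the records that follow show load_config re-creating ftl0 and two bdev_ftl_unmap calls trimming 1024 blocks at each end of the device. A minimal sketch of that flow, assembled only from the commands visible in this log, is shown below; ftl.json is a placeholder (the actual config path is not shown here), and waitforlisten/killprocess are helpers from common/autotest_common.sh.

    #!/usr/bin/env bash
    # Sketch of the restart-and-trim sequence traced in this log; error
    # handling omitted. Helper path is an assumption based on the SPDK tree.
    source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
    spdk=/home/vagrant/spdk_repo/spdk

    # Relaunch the target with FTL init-phase debug logging enabled.
    "$spdk/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # Block until the RPC server listens on the default UNIX socket.
    waitforlisten "$svcpid"

    # Recreate ftl0 from the config saved before shutdown (placeholder file).
    "$spdk/scripts/rpc.py" load_config < ftl.json

    # Trim 1024 blocks at both ends of the L2P range: the layout dump below
    # reports 23592960 L2P entries, and 23591936 + 1024 == 23592960.
    "$spdk/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$spdk/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    killprocess "$svcpid"
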
00:17:24.926 [2024-11-19 20:06:58.599796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73916 ] 00:17:25.185 [2024-11-19 20:06:58.758936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.185 [2024-11-19 20:06:58.844633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.752 20:06:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.752 20:06:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:25.752 20:06:59 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:26.009 [2024-11-19 20:06:59.626412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.009 [2024-11-19 20:06:59.626458] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.009 [2024-11-19 20:06:59.790876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.009 [2024-11-19 20:06:59.790912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.009 [2024-11-19 20:06:59.790923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.009 [2024-11-19 20:06:59.790930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.009 [2024-11-19 20:06:59.793010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.009 [2024-11-19 20:06:59.793133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.009 [2024-11-19 20:06:59.793148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:17:26.009 [2024-11-19 20:06:59.793154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.009 [2024-11-19 20:06:59.793210] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.009 [2024-11-19 20:06:59.793779] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.009 [2024-11-19 20:06:59.793793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.009 [2024-11-19 20:06:59.793799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.009 [2024-11-19 20:06:59.793807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:17:26.010 [2024-11-19 20:06:59.793812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.010 [2024-11-19 20:06:59.794763] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.270 [2024-11-19 20:06:59.804371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.804399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.270 [2024-11-19 20:06:59.804408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.612 ms 00:17:26.270 [2024-11-19 20:06:59.804416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.804477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.804487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.270 [2024-11-19 20:06:59.804493] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:26.270 [2024-11-19 20:06:59.804500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.808771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.808800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.270 [2024-11-19 20:06:59.808807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms 00:17:26.270 [2024-11-19 20:06:59.808814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.808893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.808902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.270 [2024-11-19 20:06:59.808908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:26.270 [2024-11-19 20:06:59.808915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.808938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.808946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.270 [2024-11-19 20:06:59.808952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:26.270 [2024-11-19 20:06:59.808958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.808975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.270 [2024-11-19 20:06:59.811630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.811732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.270 [2024-11-19 20:06:59.811747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:17:26.270 [2024-11-19 20:06:59.811753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.811782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.811788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.270 [2024-11-19 20:06:59.811796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:26.270 [2024-11-19 20:06:59.811813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.811828] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.270 [2024-11-19 20:06:59.811842] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.270 [2024-11-19 20:06:59.811873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.270 [2024-11-19 20:06:59.811884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.270 [2024-11-19 20:06:59.811965] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.270 [2024-11-19 20:06:59.811973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.270 [2024-11-19 20:06:59.811983] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.270 [2024-11-19 20:06:59.811992] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.270 [2024-11-19 20:06:59.812000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.270 [2024-11-19 20:06:59.812007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.270 [2024-11-19 20:06:59.812014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.270 [2024-11-19 20:06:59.812019] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.270 [2024-11-19 20:06:59.812027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.270 [2024-11-19 20:06:59.812033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.812039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.270 [2024-11-19 20:06:59.812045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:26.270 [2024-11-19 20:06:59.812052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.812118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.270 [2024-11-19 20:06:59.812126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.270 [2024-11-19 20:06:59.812132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:26.270 [2024-11-19 20:06:59.812138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.270 [2024-11-19 20:06:59.812213] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.270 [2024-11-19 20:06:59.812236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.270 [2024-11-19 20:06:59.812243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.270 [2024-11-19 20:06:59.812251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.270 [2024-11-19 20:06:59.812256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.270 [2024-11-19 20:06:59.812263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.270 [2024-11-19 20:06:59.812268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.270 [2024-11-19 20:06:59.812276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.270 [2024-11-19 20:06:59.812282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.270 [2024-11-19 20:06:59.812289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.270 [2024-11-19 20:06:59.812294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.270 [2024-11-19 20:06:59.812301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.270 [2024-11-19 20:06:59.812306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.270 [2024-11-19 20:06:59.812312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.270 [2024-11-19 20:06:59.812317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.270 [2024-11-19 20:06:59.812323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.270 
[2024-11-19 20:06:59.812328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.270 [2024-11-19 20:06:59.812334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.270 [2024-11-19 20:06:59.812341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.270 [2024-11-19 20:06:59.812348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.270 [2024-11-19 20:06:59.812357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.270 [2024-11-19 20:06:59.812363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.271 [2024-11-19 20:06:59.812375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.271 [2024-11-19 20:06:59.812391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.271 [2024-11-19 20:06:59.812408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.271 [2024-11-19 20:06:59.812424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.271 [2024-11-19 20:06:59.812437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.271 [2024-11-19 20:06:59.812444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.271 [2024-11-19 20:06:59.812449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.271 [2024-11-19 20:06:59.812455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.271 [2024-11-19 20:06:59.812459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.271 [2024-11-19 20:06:59.812474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.271 [2024-11-19 20:06:59.812485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.271 [2024-11-19 20:06:59.812490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812496] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.271 [2024-11-19 20:06:59.812504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.271 [2024-11-19 20:06:59.812512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.271 [2024-11-19 20:06:59.812524] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:26.271 [2024-11-19 20:06:59.812529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.271 [2024-11-19 20:06:59.812535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.271 [2024-11-19 20:06:59.812541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.271 [2024-11-19 20:06:59.812547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.271 [2024-11-19 20:06:59.812552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.271 [2024-11-19 20:06:59.812559] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.271 [2024-11-19 20:06:59.812566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.271 [2024-11-19 20:06:59.812582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.271 [2024-11-19 20:06:59.812589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.271 [2024-11-19 20:06:59.812595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.271 [2024-11-19 20:06:59.812601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.271 [2024-11-19 20:06:59.812606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.271 [2024-11-19 20:06:59.812613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.271 [2024-11-19 20:06:59.812618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.271 [2024-11-19 20:06:59.812624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.271 [2024-11-19 20:06:59.812630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.271 [2024-11-19 20:06:59.812659] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.271 [2024-11-19 
20:06:59.812669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.271 [2024-11-19 20:06:59.812683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.271 [2024-11-19 20:06:59.812690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.271 [2024-11-19 20:06:59.812695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.271 [2024-11-19 20:06:59.812702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.812707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.271 [2024-11-19 20:06:59.812714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:17:26.271 [2024-11-19 20:06:59.812720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.833270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.833295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.271 [2024-11-19 20:06:59.833304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.506 ms 00:17:26.271 [2024-11-19 20:06:59.833310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.833401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.833408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.271 [2024-11-19 20:06:59.833415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:26.271 [2024-11-19 20:06:59.833420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.856966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.856992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.271 [2024-11-19 20:06:59.857005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.527 ms 00:17:26.271 [2024-11-19 20:06:59.857011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.857054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.857062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.271 [2024-11-19 20:06:59.857070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:26.271 [2024-11-19 20:06:59.857075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.857370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.857382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.271 [2024-11-19 20:06:59.857390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:26.271 [2024-11-19 20:06:59.857397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.857501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.857509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.271 [2024-11-19 20:06:59.857516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:26.271 [2024-11-19 20:06:59.857521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.868918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.868943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.271 [2024-11-19 20:06:59.868951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.379 ms 00:17:26.271 [2024-11-19 20:06:59.868957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.878594] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:26.271 [2024-11-19 20:06:59.878704] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.271 [2024-11-19 20:06:59.878719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.878726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.271 [2024-11-19 20:06:59.878734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.687 ms 00:17:26.271 [2024-11-19 20:06:59.878739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.896879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.271 [2024-11-19 20:06:59.896905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.271 [2024-11-19 20:06:59.896915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.097 ms 00:17:26.271 [2024-11-19 20:06:59.896921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.271 [2024-11-19 20:06:59.905680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.905704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.272 [2024-11-19 20:06:59.905714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.691 ms 00:17:26.272 [2024-11-19 20:06:59.905720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.913982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.914014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.272 [2024-11-19 20:06:59.914023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.220 ms 00:17:26.272 [2024-11-19 20:06:59.914029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.914498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.914513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.272 [2024-11-19 20:06:59.914522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:17:26.272 [2024-11-19 20:06:59.914527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 
20:06:59.971904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.971944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.272 [2024-11-19 20:06:59.971957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.356 ms 00:17:26.272 [2024-11-19 20:06:59.971964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.979715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.272 [2024-11-19 20:06:59.991057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.991089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.272 [2024-11-19 20:06:59.991100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.030 ms 00:17:26.272 [2024-11-19 20:06:59.991107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.991162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.991171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.272 [2024-11-19 20:06:59.991178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:26.272 [2024-11-19 20:06:59.991185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.991233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.991242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.272 [2024-11-19 20:06:59.991249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:26.272 [2024-11-19 20:06:59.991257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.991276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.991284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.272 [2024-11-19 20:06:59.991290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.272 [2024-11-19 20:06:59.991297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:06:59.991322] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.272 [2024-11-19 20:06:59.991332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:06:59.991338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.272 [2024-11-19 20:06:59.991347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:26.272 [2024-11-19 20:06:59.991353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:07:00.009400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:07:00.009524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.272 [2024-11-19 20:07:00.009542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.027 ms 00:17:26.272 [2024-11-19 20:07:00.009550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:07:00.009620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.272 [2024-11-19 20:07:00.009629] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.272 [2024-11-19 20:07:00.009637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:26.272 [2024-11-19 20:07:00.009645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.272 [2024-11-19 20:07:00.010279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.272 [2024-11-19 20:07:00.012581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.162 ms, result 0 00:17:26.272 [2024-11-19 20:07:00.013319] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.272 Some configs were skipped because the RPC state that can call them passed over. 00:17:26.272 20:07:00 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:26.530 [2024-11-19 20:07:00.237340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.530 [2024-11-19 20:07:00.237374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:26.530 [2024-11-19 20:07:00.237383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:17:26.530 [2024-11-19 20:07:00.237391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.530 [2024-11-19 20:07:00.237416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.392 ms, result 0 00:17:26.530 true 00:17:26.530 20:07:00 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:26.789 [2024-11-19 20:07:00.438254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.789 [2024-11-19 20:07:00.438284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:26.789 [2024-11-19 20:07:00.438295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:17:26.789 [2024-11-19 20:07:00.438300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.789 [2024-11-19 20:07:00.438327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.099 ms, result 0 00:17:26.789 true 00:17:26.789 20:07:00 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73916 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73916 ']' 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73916 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73916 00:17:26.789 killing process with pid 73916 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73916' 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73916 00:17:26.789 20:07:00 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73916 00:17:27.358 [2024-11-19 20:07:01.002516] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.002572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.358 [2024-11-19 20:07:01.002582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.358 [2024-11-19 20:07:01.002589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.002607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.358 [2024-11-19 20:07:01.004689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.004716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.358 [2024-11-19 20:07:01.004727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:17:27.358 [2024-11-19 20:07:01.004733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.004968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.004981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.358 [2024-11-19 20:07:01.004990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:27.358 [2024-11-19 20:07:01.004995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.008216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.008244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.358 [2024-11-19 20:07:01.008255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:17:27.358 [2024-11-19 20:07:01.008261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.013477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.013501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:27.358 [2024-11-19 20:07:01.013510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.189 ms 00:17:27.358 [2024-11-19 20:07:01.013516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.021124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.021151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.358 [2024-11-19 20:07:01.021161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:17:27.358 [2024-11-19 20:07:01.021172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.027430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.027456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.358 [2024-11-19 20:07:01.027467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:17:27.358 [2024-11-19 20:07:01.027473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.027578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.027586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.358 [2024-11-19 20:07:01.027594] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:27.358 [2024-11-19 20:07:01.027599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.035169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.035194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:27.358 [2024-11-19 20:07:01.035202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.553 ms 00:17:27.358 [2024-11-19 20:07:01.035207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.042591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.042616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:27.358 [2024-11-19 20:07:01.042626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.332 ms 00:17:27.358 [2024-11-19 20:07:01.042632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.049402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.049428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.358 [2024-11-19 20:07:01.049438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.736 ms 00:17:27.358 [2024-11-19 20:07:01.049444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.358 [2024-11-19 20:07:01.056606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.358 [2024-11-19 20:07:01.056636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.358 [2024-11-19 20:07:01.056645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:17:27.358 [2024-11-19 20:07:01.056650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
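The two trim passes issued earlier via ftl/trim.sh@78 and @79 go through scripts/rpc.py bdev_ftl_unmap. A minimal sketch of an equivalent driver, assuming a running SPDK app that already exposes the ftl0 bdev (paths and flags are the ones recorded in this log; the ftl_unmap helper itself is illustrative, not SPDK API):

    #!/usr/bin/env python3
    # Sketch: replay the two bdev_ftl_unmap calls recorded in this log.
    # Assumes an SPDK app is running and scripts/rpc.py can reach its RPC socket.
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def ftl_unmap(bdev: str, lba: int, num_blocks: int) -> None:
        # Flags mirror the invocations above:
        # rpc.py bdev_ftl_unmap -b ftl0 --lba <lba> --num_blocks 1024
        subprocess.run(
            [RPC, "bdev_ftl_unmap", "-b", bdev,
             "--lba", str(lba), "--num_blocks", str(num_blocks)],
            check=True,
        )

    ftl_unmap("ftl0", 0, 1024)         # first 1024 blocks of the device
    ftl_unmap("ftl0", 23591936, 1024)  # 23591936 + 1024 = 23592960, the L2P entry count in the layout dump

Each call surfaces in this log as its own 'FTL trim' management process (durations 1.392 ms and 2.099 ms above).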
00:17:27.358 [2024-11-19 20:07:01.056677] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.358 [2024-11-19 20:07:01.056688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report the same values) 00:17:27.359 [2024-11-19 20:07:01.057332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.359 [2024-11-19 20:07:01.057341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:17:27.359 [2024-11-19 20:07:01.057351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.359 [2024-11-19 20:07:01.057360] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.359 [2024-11-19 20:07:01.057366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.359 [2024-11-19 20:07:01.057373] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.359 [2024-11-19 20:07:01.057378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.359 [2024-11-19 20:07:01.057385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.360 [2024-11-19 20:07:01.057390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.360 [2024-11-19 20:07:01.057396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.360 [2024-11-19 20:07:01.057401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.360 [2024-11-19 20:07:01.057408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
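The stats dump above prints WAF: inf because all 960 media writes so far are internal metadata writes and user writes is still 0; write amplification here is total media writes over user writes. The arithmetic, as a sketch (the waf helper is illustrative):

    import math

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes per user write.
        # With zero user writes, as in the dump above, the ratio is infinite.
        return math.inf if user_writes == 0 else total_writes / user_writes

    assert waf(960, 0) == math.inf  # matches "total writes: 960", "user writes: 0", "WAF: inf"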
00:17:27.360 [2024-11-19 20:07:01.057413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.360 [2024-11-19 20:07:01.057421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:17:27.360 [2024-11-19 20:07:01.057426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.066942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.360 [2024-11-19 20:07:01.066967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:27.360 [2024-11-19 20:07:01.066977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.488 ms 00:17:27.360 [2024-11-19 20:07:01.066983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.067268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.360 [2024-11-19 20:07:01.067281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:27.360 [2024-11-19 20:07:01.067289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:27.360 [2024-11-19 20:07:01.067296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.101989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.360 [2024-11-19 20:07:01.102014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.360 [2024-11-19 20:07:01.102023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.360 [2024-11-19 20:07:01.102029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.102104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.360 [2024-11-19 20:07:01.102111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.360 [2024-11-19 20:07:01.102119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.360 [2024-11-19 20:07:01.102126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.102162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.360 [2024-11-19 20:07:01.102170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.360 [2024-11-19 20:07:01.102179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.360 [2024-11-19 20:07:01.102184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.360 [2024-11-19 20:07:01.102199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.360 [2024-11-19 20:07:01.102205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.360 [2024-11-19 20:07:01.102212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.360 [2024-11-19 20:07:01.102217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.618 [2024-11-19 20:07:01.162424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.618 [2024-11-19 20:07:01.162453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.618 [2024-11-19 20:07:01.162463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.618 [2024-11-19 20:07:01.162470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.618 [2024-11-19 
20:07:01.211522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.618 [2024-11-19 20:07:01.211554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.618 [2024-11-19 20:07:01.211564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.618 [2024-11-19 20:07:01.211572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.619 [2024-11-19 20:07:01.211651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.619 [2024-11-19 20:07:01.211695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.619 [2024-11-19 20:07:01.211786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:27.619 [2024-11-19 20:07:01.211832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.619 [2024-11-19 20:07:01.211883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.211924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.619 [2024-11-19 20:07:01.211931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.619 [2024-11-19 20:07:01.211939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.619 [2024-11-19 20:07:01.211944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.619 [2024-11-19 20:07:01.212049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.517 ms, result 0 00:17:28.186 20:07:01 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:28.186 20:07:01 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.186 [2024-11-19 20:07:01.790486] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:28.186 [2024-11-19 20:07:01.790919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73963 ] 00:17:28.186 [2024-11-19 20:07:01.947648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.445 [2024-11-19 20:07:02.030333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.445 [2024-11-19 20:07:02.233962] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.445 [2024-11-19 20:07:02.234014] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.704 [2024-11-19 20:07:02.385413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.704 [2024-11-19 20:07:02.385445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.704 [2024-11-19 20:07:02.385455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.704 [2024-11-19 20:07:02.385468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.704 [2024-11-19 20:07:02.387477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.704 [2024-11-19 20:07:02.387505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.704 [2024-11-19 20:07:02.387513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:17:28.704 [2024-11-19 20:07:02.387519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.704 [2024-11-19 20:07:02.387573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.704 [2024-11-19 20:07:02.388116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.704 [2024-11-19 20:07:02.388133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.704 [2024-11-19 20:07:02.388139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.704 [2024-11-19 20:07:02.388146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:17:28.704 [2024-11-19 20:07:02.388151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.704 [2024-11-19 20:07:02.389132] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:28.704 [2024-11-19 20:07:02.398739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.704 [2024-11-19 20:07:02.398770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:28.705 [2024-11-19 20:07:02.398779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.608 ms 00:17:28.705 [2024-11-19 20:07:02.398785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.398852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.398860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:28.705 [2024-11-19 20:07:02.398867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:17:28.705 [2024-11-19 20:07:02.398873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.403107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.403131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.705 [2024-11-19 20:07:02.403138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:17:28.705 [2024-11-19 20:07:02.403144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.403209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.403217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.705 [2024-11-19 20:07:02.403233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:28.705 [2024-11-19 20:07:02.403239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.403255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.403263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.705 [2024-11-19 20:07:02.403269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.705 [2024-11-19 20:07:02.403276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.403293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.705 [2024-11-19 20:07:02.405886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.405913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.705 [2024-11-19 20:07:02.405920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:28.705 [2024-11-19 20:07:02.405926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.405957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.405964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.705 [2024-11-19 20:07:02.405970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.705 [2024-11-19 20:07:02.405976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.405994] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:28.705 [2024-11-19 20:07:02.406012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:28.705 [2024-11-19 20:07:02.406045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:28.705 [2024-11-19 20:07:02.406059] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:28.705 [2024-11-19 20:07:02.406137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.705 [2024-11-19 20:07:02.406145] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.705 [2024-11-19 20:07:02.406153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.705 [2024-11-19 20:07:02.406160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406169] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.705 [2024-11-19 20:07:02.406180] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.705 [2024-11-19 20:07:02.406186] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.705 [2024-11-19 20:07:02.406191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.705 [2024-11-19 20:07:02.406197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.406203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.705 [2024-11-19 20:07:02.406209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:17:28.705 [2024-11-19 20:07:02.406214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.406292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.705 [2024-11-19 20:07:02.406303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.705 [2024-11-19 20:07:02.406311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:28.705 [2024-11-19 20:07:02.406317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.705 [2024-11-19 20:07:02.406406] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.705 [2024-11-19 20:07:02.406415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.705 [2024-11-19 20:07:02.406421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.705 [2024-11-19 20:07:02.406438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.705 [2024-11-19 20:07:02.406456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.705 [2024-11-19 20:07:02.406467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.705 [2024-11-19 20:07:02.406472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.705 [2024-11-19 20:07:02.406477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.705 [2024-11-19 20:07:02.406486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.705 [2024-11-19 20:07:02.406492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.705 [2024-11-19 20:07:02.406496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406502] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.705 [2024-11-19 20:07:02.406507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.705 [2024-11-19 20:07:02.406522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.705 [2024-11-19 20:07:02.406536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.705 [2024-11-19 20:07:02.406551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.705 [2024-11-19 20:07:02.406566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.705 [2024-11-19 20:07:02.406581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.705 [2024-11-19 20:07:02.406590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.705 [2024-11-19 20:07:02.406595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.705 [2024-11-19 20:07:02.406600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.705 [2024-11-19 20:07:02.406605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.705 [2024-11-19 20:07:02.406610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.705 [2024-11-19 20:07:02.406614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.705 [2024-11-19 20:07:02.406627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.705 [2024-11-19 20:07:02.406632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406637] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.705 [2024-11-19 20:07:02.406643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.705 [2024-11-19 20:07:02.406648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.705 [2024-11-19 20:07:02.406660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.705 
[2024-11-19 20:07:02.406665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.705 [2024-11-19 20:07:02.406670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.705 [2024-11-19 20:07:02.406675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.705 [2024-11-19 20:07:02.406680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.705 [2024-11-19 20:07:02.406685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.705 [2024-11-19 20:07:02.406691] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.705 [2024-11-19 20:07:02.406697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.706 [2024-11-19 20:07:02.406709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.706 [2024-11-19 20:07:02.406715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:28.706 [2024-11-19 20:07:02.406720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.706 [2024-11-19 20:07:02.406725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.706 [2024-11-19 20:07:02.406730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.706 [2024-11-19 20:07:02.406736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.706 [2024-11-19 20:07:02.406741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.706 [2024-11-19 20:07:02.406746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.706 [2024-11-19 20:07:02.406751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.706 [2024-11-19 20:07:02.406778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.706 [2024-11-19 20:07:02.406784] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.706 [2024-11-19 20:07:02.406797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.706 [2024-11-19 20:07:02.406803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.706 [2024-11-19 20:07:02.406808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.706 [2024-11-19 20:07:02.406814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.406819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.706 [2024-11-19 20:07:02.406826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:28.706 [2024-11-19 20:07:02.406832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.427378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.427405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.706 [2024-11-19 20:07:02.427413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.508 ms 00:17:28.706 [2024-11-19 20:07:02.427419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.427511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.427519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.706 [2024-11-19 20:07:02.427525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:28.706 [2024-11-19 20:07:02.427531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.463209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.463248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.706 [2024-11-19 20:07:02.463259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.662 ms 00:17:28.706 [2024-11-19 20:07:02.463266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.463323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.463333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.706 [2024-11-19 20:07:02.463340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.706 [2024-11-19 20:07:02.463345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.463621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.463641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.706 [2024-11-19 20:07:02.463648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:28.706 [2024-11-19 20:07:02.463657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 
20:07:02.463760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.463767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.706 [2024-11-19 20:07:02.463773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:28.706 [2024-11-19 20:07:02.463778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.474484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.474508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.706 [2024-11-19 20:07:02.474516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.690 ms 00:17:28.706 [2024-11-19 20:07:02.474522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.706 [2024-11-19 20:07:02.484254] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:28.706 [2024-11-19 20:07:02.484282] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:28.706 [2024-11-19 20:07:02.484290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.706 [2024-11-19 20:07:02.484297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:28.706 [2024-11-19 20:07:02.484303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:17:28.706 [2024-11-19 20:07:02.484309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.502784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.502819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.968 [2024-11-19 20:07:02.502828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.429 ms 00:17:28.968 [2024-11-19 20:07:02.502833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.511804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.511829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.968 [2024-11-19 20:07:02.511836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.918 ms 00:17:28.968 [2024-11-19 20:07:02.511842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.520486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.520510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.968 [2024-11-19 20:07:02.520517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.605 ms 00:17:28.968 [2024-11-19 20:07:02.520522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.520975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.520995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.968 [2024-11-19 20:07:02.521002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:28.968 [2024-11-19 20:07:02.521008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.563635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.563676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.968 [2024-11-19 20:07:02.563686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.609 ms 00:17:28.968 [2024-11-19 20:07:02.563693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.571525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.968 [2024-11-19 20:07:02.582553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.582580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.968 [2024-11-19 20:07:02.582590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.800 ms 00:17:28.968 [2024-11-19 20:07:02.582599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.582666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.582674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.968 [2024-11-19 20:07:02.582681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.968 [2024-11-19 20:07:02.582686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.582721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.582727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.968 [2024-11-19 20:07:02.582734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:28.968 [2024-11-19 20:07:02.582742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.582762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.582769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.968 [2024-11-19 20:07:02.582775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.968 [2024-11-19 20:07:02.582780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.582804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.968 [2024-11-19 20:07:02.582812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.582818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.968 [2024-11-19 20:07:02.582824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.968 [2024-11-19 20:07:02.582829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.600709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.600738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.968 [2024-11-19 20:07:02.600746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:17:28.968 [2024-11-19 20:07:02.600752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.600822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.968 [2024-11-19 20:07:02.600831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:28.968 [2024-11-19 20:07:02.600837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:28.968 [2024-11-19 20:07:02.600843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.968 [2024-11-19 20:07:02.601447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.968 [2024-11-19 20:07:02.603709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.813 ms, result 0 00:17:28.968 [2024-11-19 20:07:02.604412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.968 [2024-11-19 20:07:02.619089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.914  [2024-11-19T20:07:04.653Z] Copying: 21/256 [MB] (21 MBps) [2024-11-19T20:07:06.036Z] Copying: 40/256 [MB] (19 MBps) [2024-11-19T20:07:06.976Z] Copying: 58/256 [MB] (18 MBps) [2024-11-19T20:07:07.917Z] Copying: 72/256 [MB] (14 MBps) [2024-11-19T20:07:08.859Z] Copying: 86/256 [MB] (13 MBps) [2024-11-19T20:07:09.795Z] Copying: 106/256 [MB] (20 MBps) [2024-11-19T20:07:10.740Z] Copying: 128/256 [MB] (22 MBps) [2024-11-19T20:07:11.686Z] Copying: 148/256 [MB] (20 MBps) [2024-11-19T20:07:12.632Z] Copying: 170/256 [MB] (21 MBps) [2024-11-19T20:07:14.019Z] Copying: 191/256 [MB] (20 MBps) [2024-11-19T20:07:14.971Z] Copying: 207/256 [MB] (16 MBps) [2024-11-19T20:07:15.644Z] Copying: 218/256 [MB] (10 MBps) [2024-11-19T20:07:17.034Z] Copying: 232/256 [MB] (13 MBps) [2024-11-19T20:07:17.979Z] Copying: 243/256 [MB] (10 MBps) [2024-11-19T20:07:17.979Z] Copying: 255/256 [MB] (11 MBps) [2024-11-19T20:07:17.979Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-19 20:07:17.715336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.185 [2024-11-19 20:07:17.726059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.726135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.185 [2024-11-19 20:07:17.726160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.185 [2024-11-19 20:07:17.726186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.726236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:44.185 [2024-11-19 20:07:17.729636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.729678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.185 [2024-11-19 20:07:17.729690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:17:44.185 [2024-11-19 20:07:17.729699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.729973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.729990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.185 [2024-11-19 20:07:17.730000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:17:44.185 [2024-11-19 20:07:17.730008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.733714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.733744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.185 [2024-11-19 20:07:17.733754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:17:44.185 [2024-11-19 20:07:17.733761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.740740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.740783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.185 [2024-11-19 20:07:17.740795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.960 ms 00:17:44.185 [2024-11-19 20:07:17.740803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.767850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.767904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.185 [2024-11-19 20:07:17.767917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.979 ms 00:17:44.185 [2024-11-19 20:07:17.767925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.784486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.784541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.185 [2024-11-19 20:07:17.784564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:17:44.185 [2024-11-19 20:07:17.784573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.784753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.784765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.185 [2024-11-19 20:07:17.784775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:44.185 [2024-11-19 20:07:17.784782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.185 [2024-11-19 20:07:17.811436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.185 [2024-11-19 20:07:17.811488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:44.185 [2024-11-19 20:07:17.811499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.625 ms 00:17:44.185 [2024-11-19 20:07:17.811507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-19 20:07:17.837777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-19 20:07:17.837825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:44.186 [2024-11-19 20:07:17.837837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.203 ms 00:17:44.186 [2024-11-19 20:07:17.837844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-19 20:07:17.863006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-19 20:07:17.863056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.186 [2024-11-19 20:07:17.863068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.095 ms 00:17:44.186 [2024-11-19 20:07:17.863075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
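Relative to the first shutdown above, where the persist steps mostly took 3-8 ms, the same steps now run in the 16-27 ms range (Persist superblock: 25.095 ms vs 6.736 ms), reflecting the 256 MiB written through ftl0 in between. A throwaway parser for pulling these per-step durations out of trace_step output, keyed to the exact log format above (regex and function are illustrative, not SPDK tooling):

    import re

    # Pair each "name: <step>" trace line with the "duration: <ms>" line that follows it.
    PAIR = re.compile(
        r"name: ([^\[]+?) (?:\d{2}:\d{2}:\d{2}\.\d{3} )?\[[^]]+\] mngt/ftl_mngt\.c: "
        r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms"
    )

    def step_durations(log_text: str) -> list[tuple[str, float]]:
        # e.g. ("Persist superblock", 25.095) for this shutdown, 6.736 for the first one
        return [(name.strip(), float(ms)) for name, ms in PAIR.findall(log_text)]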
[2024-11-19 20:07:17.888488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-19 20:07:17.888696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.186 [2024-11-19 20:07:17.888720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.314 ms 00:17:44.186 [2024-11-19 20:07:17.888728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-19 20:07:17.888853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.186 [2024-11-19 20:07:17.888871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-100 omitted: every entry is identical, 0 / 261120 wr_cnt: 0 state: free] 00:17:44.187 [2024-11-19 20:07:17.889705] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.187 [2024-11-19 20:07:17.889714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:17:44.187 [2024-11-19 20:07:17.889723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.187 [2024-11-19 20:07:17.889732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.187 [2024-11-19 20:07:17.889740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.187 [2024-11-19 20:07:17.889748] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.187 [2024-11-19 20:07:17.889755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.187 [2024-11-19 20:07:17.889763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.187 [2024-11-19 20:07:17.889774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.187 [2024-11-19 20:07:17.889781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.187 [2024-11-19 20:07:17.889787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.187 [2024-11-19 20:07:17.889795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-19 20:07:17.889803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.187 [2024-11-19 20:07:17.889813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:17:44.187 [2024-11-19 20:07:17.889820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.903650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-19 20:07:17.903825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.187 [2024-11-19 20:07:17.903843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:17:44.187 [2024-11-19 20:07:17.903852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.904303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-19 20:07:17.904318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.187 [2024-11-19 20:07:17.904340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:44.187 [2024-11-19 20:07:17.904348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.943770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.187 [2024-11-19 20:07:17.943825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.187 [2024-11-19 20:07:17.943836] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.187 [2024-11-19 20:07:17.943845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.943933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.187 [2024-11-19 20:07:17.943942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.187 [2024-11-19 20:07:17.943951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.187 [2024-11-19 20:07:17.943959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.944019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.187 [2024-11-19 20:07:17.944029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.187 [2024-11-19 20:07:17.944038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.187 [2024-11-19 20:07:17.944046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-19 20:07:17.944066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.187 [2024-11-19 20:07:17.944075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.187 [2024-11-19 20:07:17.944083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.187 [2024-11-19 20:07:17.944091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.029182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.029254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.449 [2024-11-19 20:07:18.029269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.029277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.449 [2024-11-19 20:07:18.099409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.449 [2024-11-19 20:07:18.099505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.449 [2024-11-19 20:07:18.099570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:17:44.449 [2024-11-19 20:07:18.099700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.449 [2024-11-19 20:07:18.099767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.449 [2024-11-19 20:07:18.099840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.099898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.449 [2024-11-19 20:07:18.099913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.449 [2024-11-19 20:07:18.099923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.449 [2024-11-19 20:07:18.099931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-19 20:07:18.100089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.035 ms, result 0 00:17:45.393 00:17:45.393 00:17:45.394 20:07:18 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:45.394 20:07:18 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:45.656 20:07:19 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.916 [2024-11-19 20:07:19.504256] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:17:45.916 [2024-11-19 20:07:19.504409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:17:45.916 [2024-11-19 20:07:19.669698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.178 [2024-11-19 20:07:19.792633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.439 [2024-11-19 20:07:20.082932] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.439 [2024-11-19 20:07:20.083025] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.700 [2024-11-19 20:07:20.246262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.246500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.700 [2024-11-19 20:07:20.246525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.700 [2024-11-19 20:07:20.246537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.249545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.249727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.700 [2024-11-19 20:07:20.249748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:17:46.700 [2024-11-19 20:07:20.249757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.249879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.700 [2024-11-19 20:07:20.250637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.700 [2024-11-19 20:07:20.250657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.250666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.700 [2024-11-19 20:07:20.250676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:17:46.700 [2024-11-19 20:07:20.250684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.252458] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.700 [2024-11-19 20:07:20.266849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.266910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.700 [2024-11-19 20:07:20.266925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.393 ms 00:17:46.700 [2024-11-19 20:07:20.266933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.267063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.267075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.700 [2024-11-19 20:07:20.267085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:46.700 [2024-11-19 20:07:20.267092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.275618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:46.700 [2024-11-19 20:07:20.275666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.700 [2024-11-19 20:07:20.275677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.481 ms 00:17:46.700 [2024-11-19 20:07:20.275685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.275797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.275809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.700 [2024-11-19 20:07:20.275817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:46.700 [2024-11-19 20:07:20.275826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.700 [2024-11-19 20:07:20.275853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.700 [2024-11-19 20:07:20.275864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.700 [2024-11-19 20:07:20.275873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:46.701 [2024-11-19 20:07:20.275881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.701 [2024-11-19 20:07:20.275904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.701 [2024-11-19 20:07:20.280156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.701 [2024-11-19 20:07:20.280199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.701 [2024-11-19 20:07:20.280211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:17:46.701 [2024-11-19 20:07:20.280219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.701 [2024-11-19 20:07:20.280323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.701 [2024-11-19 20:07:20.280333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.701 [2024-11-19 20:07:20.280342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:46.701 [2024-11-19 20:07:20.280351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.701 [2024-11-19 20:07:20.280373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.701 [2024-11-19 20:07:20.280399] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.701 [2024-11-19 20:07:20.280444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.701 [2024-11-19 20:07:20.280460] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.701 [2024-11-19 20:07:20.280567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.701 [2024-11-19 20:07:20.280579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.701 [2024-11-19 20:07:20.280590] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.701 [2024-11-19 20:07:20.280601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.701 [2024-11-19 20:07:20.280613] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.701 [2024-11-19 20:07:20.280621] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.701 [2024-11-19 20:07:20.280630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.701 [2024-11-19 20:07:20.280637] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.701 [2024-11-19 20:07:20.280644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.701 [2024-11-19 20:07:20.280652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.701 [2024-11-19 20:07:20.280659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.701 [2024-11-19 20:07:20.280667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:46.701 [2024-11-19 20:07:20.280675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.701 [2024-11-19 20:07:20.280762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.701 [2024-11-19 20:07:20.280771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.701 [2024-11-19 20:07:20.280781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:46.701 [2024-11-19 20:07:20.280789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.701 [2024-11-19 20:07:20.280891] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.701 [2024-11-19 20:07:20.280900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.701 [2024-11-19 20:07:20.280909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.701 [2024-11-19 20:07:20.280918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.280925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.701 [2024-11-19 20:07:20.280935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.280943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.701 [2024-11-19 20:07:20.280950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.701 [2024-11-19 20:07:20.280958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.701 [2024-11-19 20:07:20.280965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.701 [2024-11-19 20:07:20.280972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.701 [2024-11-19 20:07:20.280979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.701 [2024-11-19 20:07:20.280986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.701 [2024-11-19 20:07:20.281000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.701 [2024-11-19 20:07:20.281007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.701 [2024-11-19 20:07:20.281014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.701 [2024-11-19 20:07:20.281027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281033] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.701 [2024-11-19 20:07:20.281047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.701 [2024-11-19 20:07:20.281067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.701 [2024-11-19 20:07:20.281089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.701 [2024-11-19 20:07:20.281109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.701 [2024-11-19 20:07:20.281130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.701 [2024-11-19 20:07:20.281143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.701 [2024-11-19 20:07:20.281149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.701 [2024-11-19 20:07:20.281156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.701 [2024-11-19 20:07:20.281165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.701 [2024-11-19 20:07:20.281173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.701 [2024-11-19 20:07:20.281179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.701 [2024-11-19 20:07:20.281193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.701 [2024-11-19 20:07:20.281200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.701 [2024-11-19 20:07:20.281214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.701 [2024-11-19 20:07:20.281238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.701 [2024-11-19 20:07:20.281250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.701 [2024-11-19 20:07:20.281259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.701 [2024-11-19 20:07:20.281266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.701 [2024-11-19 20:07:20.281272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.702 
[2024-11-19 20:07:20.281279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.702 [2024-11-19 20:07:20.281286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.702 [2024-11-19 20:07:20.281293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.702 [2024-11-19 20:07:20.281303] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.702 [2024-11-19 20:07:20.281312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.702 [2024-11-19 20:07:20.281329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.702 [2024-11-19 20:07:20.281336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.702 [2024-11-19 20:07:20.281344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.702 [2024-11-19 20:07:20.281351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.702 [2024-11-19 20:07:20.281358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.702 [2024-11-19 20:07:20.281365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.702 [2024-11-19 20:07:20.281374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.702 [2024-11-19 20:07:20.281381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.702 [2024-11-19 20:07:20.281390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.702 [2024-11-19 20:07:20.281426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.702 [2024-11-19 20:07:20.281435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.702 [2024-11-19 20:07:20.281452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.702 [2024-11-19 20:07:20.281459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.702 [2024-11-19 20:07:20.281466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.702 [2024-11-19 20:07:20.281473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.281481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.702 [2024-11-19 20:07:20.281492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:17:46.702 [2024-11-19 20:07:20.281499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.316140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.316199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.702 [2024-11-19 20:07:20.316212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.575 ms 00:17:46.702 [2024-11-19 20:07:20.316238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.316391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.316409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.702 [2024-11-19 20:07:20.316419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:46.702 [2024-11-19 20:07:20.316427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.372345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.372411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.702 [2024-11-19 20:07:20.372426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.888 ms 00:17:46.702 [2024-11-19 20:07:20.372441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.372586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.372599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.702 [2024-11-19 20:07:20.372610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.702 [2024-11-19 20:07:20.372618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.373190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.373250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.702 [2024-11-19 20:07:20.373263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:46.702 [2024-11-19 20:07:20.373282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.373442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.373454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.702 [2024-11-19 20:07:20.373464] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:46.702 [2024-11-19 20:07:20.373472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.390635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.390686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.702 [2024-11-19 20:07:20.390699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.138 ms 00:17:46.702 [2024-11-19 20:07:20.390708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.405320] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.702 [2024-11-19 20:07:20.405373] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.702 [2024-11-19 20:07:20.405389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.405398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.702 [2024-11-19 20:07:20.405409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:17:46.702 [2024-11-19 20:07:20.405417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.432571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.432638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.702 [2024-11-19 20:07:20.432652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.013 ms 00:17:46.702 [2024-11-19 20:07:20.432662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.445989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.446043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.702 [2024-11-19 20:07:20.446055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.207 ms 00:17:46.702 [2024-11-19 20:07:20.446063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.459207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.459274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.702 [2024-11-19 20:07:20.459287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.042 ms 00:17:46.702 [2024-11-19 20:07:20.459296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.702 [2024-11-19 20:07:20.460261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.702 [2024-11-19 20:07:20.460310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.702 [2024-11-19 20:07:20.460328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:17:46.702 [2024-11-19 20:07:20.460343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.529566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.529644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.964 [2024-11-19 20:07:20.529662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.173 ms 00:17:46.964 [2024-11-19 20:07:20.529672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.542004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.964 [2024-11-19 20:07:20.562211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.562276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.964 [2024-11-19 20:07:20.562291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.382 ms 00:17:46.964 [2024-11-19 20:07:20.562300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.562414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.562426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.964 [2024-11-19 20:07:20.562437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:46.964 [2024-11-19 20:07:20.562445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.562503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.562513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.964 [2024-11-19 20:07:20.562521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:46.964 [2024-11-19 20:07:20.562529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.562557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.562568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.964 [2024-11-19 20:07:20.562576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:46.964 [2024-11-19 20:07:20.562584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.964 [2024-11-19 20:07:20.562622] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.964 [2024-11-19 20:07:20.562633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.964 [2024-11-19 20:07:20.562642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.964 [2024-11-19 20:07:20.562650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:46.964 [2024-11-19 20:07:20.562660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.965 [2024-11-19 20:07:20.589000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.965 [2024-11-19 20:07:20.589055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.965 [2024-11-19 20:07:20.589070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.316 ms 00:17:46.965 [2024-11-19 20:07:20.589078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.965 [2024-11-19 20:07:20.589239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.965 [2024-11-19 20:07:20.589253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.965 [2024-11-19 20:07:20.589264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:46.965 [2024-11-19 20:07:20.589272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
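The startup sequence above ends with Set FTL dirty state, and each graceful shutdown (as in the sequence that follows) ends with Set FTL clean state; the clean/dirty pair appears to be how FTL distinguishes a graceful shutdown from a crash on the next load. A small illustrative Python sketch, under the same assumption as before (raw console log, one notice per line), that flags dirty transitions never matched by a clean one:

import re
import sys

# Matches the two state-transition steps seen in the log above.
STATE_RE = re.compile(r"name: Set FTL (dirty|clean) state")

def check(lines):
    """Pair each 'dirty' transition with the 'clean' one that should follow it."""
    open_dirty = 0
    for n, line in enumerate(lines, 1):
        m = STATE_RE.search(line)
        if not m:
            continue
        if m.group(1) == "dirty":
            open_dirty += 1
        elif open_dirty == 0:
            # A clean transition with no preceding dirty one usually means the
            # capture starts mid-cycle, as this excerpt does.
            print(f"line {n}: clean state without preceding dirty state")
        else:
            open_dirty -= 1
    if open_dirty:
        print(f"{open_dirty} dirty transition(s) never marked clean (unclean shutdown?)")

if __name__ == "__main__":
    check(sys.stdin)

This is a log-analysis convenience only; it assumes the capture covers complete start/stop cycles and reports boundary effects otherwise.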
00:17:46.965 [2024-11-19 20:07:20.590671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.965 [2024-11-19 20:07:20.594258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.957 ms, result 0 00:17:46.965 [2024-11-19 20:07:20.595732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.965 [2024-11-19 20:07:20.609710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:47.226  [2024-11-19T20:07:21.020Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-19 20:07:20.875783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.226 [2024-11-19 20:07:20.885421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.226 [2024-11-19 20:07:20.885474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:47.226 [2024-11-19 20:07:20.885488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:47.226 [2024-11-19 20:07:20.885506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.226 [2024-11-19 20:07:20.885568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:47.226 [2024-11-19 20:07:20.888546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.226 [2024-11-19 20:07:20.888589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:47.227 [2024-11-19 20:07:20.888600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:17:47.227 [2024-11-19 20:07:20.888609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.891591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.891640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:47.227 [2024-11-19 20:07:20.891651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:17:47.227 [2024-11-19 20:07:20.891659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.896031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.896078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:47.227 [2024-11-19 20:07:20.896088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:17:47.227 [2024-11-19 20:07:20.896096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.903027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.903073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:47.227 [2024-11-19 20:07:20.903084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:17:47.227 [2024-11-19 20:07:20.903092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.928976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.929028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:47.227 [2024-11-19 20:07:20.929042] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.835 ms 00:17:47.227 [2024-11-19 20:07:20.929049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.945309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.945369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:47.227 [2024-11-19 20:07:20.945385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.207 ms 00:17:47.227 [2024-11-19 20:07:20.945392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.945568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.945581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:47.227 [2024-11-19 20:07:20.945590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:47.227 [2024-11-19 20:07:20.945598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.972207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.972275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:47.227 [2024-11-19 20:07:20.972287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.581 ms 00:17:47.227 [2024-11-19 20:07:20.972294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.227 [2024-11-19 20:07:20.998344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.227 [2024-11-19 20:07:20.998396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:47.227 [2024-11-19 20:07:20.998408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.984 ms 00:17:47.227 [2024-11-19 20:07:20.998415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.490 [2024-11-19 20:07:21.024433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.490 [2024-11-19 20:07:21.024492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:47.490 [2024-11-19 20:07:21.024504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.965 ms 00:17:47.490 [2024-11-19 20:07:21.024511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.490 [2024-11-19 20:07:21.049957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.490 [2024-11-19 20:07:21.050007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:47.490 [2024-11-19 20:07:21.050018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.349 ms 00:17:47.490 [2024-11-19 20:07:21.050025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.490 [2024-11-19 20:07:21.050077] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:47.490 [2024-11-19 20:07:21.050094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-53 omitted: every entry is identical, 0 / 261120 wr_cnt: 0 state: free] 00:17:47.491 [2024-11-19 20:07:21.050516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050702] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:47.491 [2024-11-19 20:07:21.050820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:47.492 [2024-11-19 20:07:21.050884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:47.492 [2024-11-19 20:07:21.050892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:17:47.492 [2024-11-19 20:07:21.050900] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:47.492 [2024-11-19 20:07:21.050908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:47.492 [2024-11-19 20:07:21.050916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:47.492 [2024-11-19 20:07:21.050924] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:47.492 [2024-11-19 20:07:21.050931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:47.492 [2024-11-19 20:07:21.050939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:47.492 [2024-11-19 20:07:21.050946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:47.492 [2024-11-19 20:07:21.050953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:47.492 [2024-11-19 20:07:21.050958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:47.492 [2024-11-19 20:07:21.050967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.492 [2024-11-19 20:07:21.050978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:47.492 [2024-11-19 20:07:21.050987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:17:47.492 [2024-11-19 20:07:21.050994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.064628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.492 [2024-11-19 20:07:21.064672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:47.492 [2024-11-19 20:07:21.064684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.601 ms 00:17:47.492 [2024-11-19 20:07:21.064691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.065105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.492 [2024-11-19 20:07:21.065130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:47.492 [2024-11-19 20:07:21.065139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:17:47.492 [2024-11-19 20:07:21.065146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.104568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.104618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.492 [2024-11-19 20:07:21.104630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.104638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.104748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.104758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.492 [2024-11-19 20:07:21.104767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.104774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.104827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.104837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.492 [2024-11-19 20:07:21.104845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.104853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.104870] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.104881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.492 [2024-11-19 20:07:21.104888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.104896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.191002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.191065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.492 [2024-11-19 20:07:21.191079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.191088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.261738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.261797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.492 [2024-11-19 20:07:21.261810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.261819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.261905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.261915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.492 [2024-11-19 20:07:21.261924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.261932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.261966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.261977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.492 [2024-11-19 20:07:21.261991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.261999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.262103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.262115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.492 [2024-11-19 20:07:21.262124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.262132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.262167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.262177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:47.492 [2024-11-19 20:07:21.262186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.262197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.492 [2024-11-19 20:07:21.262263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.492 [2024-11-19 20:07:21.262274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.492 [2024-11-19 20:07:21.262283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.492 [2024-11-19 20:07:21.262291] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:17:47.492 [2024-11-19 20:07:21.262341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:47.492 [2024-11-19 20:07:21.262352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:47.492 [2024-11-19 20:07:21.262364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:47.492 [2024-11-19 20:07:21.262373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:47.492 [2024-11-19 20:07:21.262531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.102 ms, result 0
00:17:48.437
00:17:48.437
00:17:48.437 20:07:22 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74185
00:17:48.437 20:07:22 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74185
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74185 ']'
00:17:48.437 20:07:22 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:48.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:48.437 20:07:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:17:48.698 [2024-11-19 20:07:22.263238] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
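The xtrace above shows the harness pattern for this restart: trim.sh launches a fresh spdk_tgt with -L ftl_init, then waitforlisten blocks until the new process (pid 74185) is up and answering on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. As a rough illustration of what such a wait amounts to, here is a minimal standalone sketch in Python; wait_for_rpc_socket is a hypothetical helper, not the actual waitforlisten shell function from common/autotest_common.sh.

import socket
import time

def wait_for_rpc_socket(path="/var/tmp/spdk.sock", max_retries=100, delay=0.5):
    # Poll the UNIX domain socket until the target accepts a connection,
    # or give up after max_retries attempts. (Hypothetical helper for
    # illustration only; the real logic lives in autotest_common.sh.)
    for _attempt in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)   # succeeds once the RPC server is listening
            return True
        except OSError:
            time.sleep(delay)  # socket absent or not accepting yet; retry
        finally:
            s.close()
    return False

if __name__ == "__main__":
    if not wait_for_rpc_socket():
        raise SystemExit("spdk_tgt never started listening on /var/tmp/spdk.sock")

Actually connecting, rather than merely testing that the socket file exists, is the safer probe: a stale socket file from a previous run can sit on disk while nothing is listening behind it.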
00:17:48.698 [2024-11-19 20:07:22.263406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74185 ] 00:17:48.698 [2024-11-19 20:07:22.429252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.959 [2024-11-19 20:07:22.552030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.534 20:07:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.534 20:07:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:49.534 20:07:23 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:49.796 [2024-11-19 20:07:23.446702] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.796 [2024-11-19 20:07:23.446778] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.060 [2024-11-19 20:07:23.614388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.614453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.060 [2024-11-19 20:07:23.614470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:50.060 [2024-11-19 20:07:23.614480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.617450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.617503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.060 [2024-11-19 20:07:23.617516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:17:50.060 [2024-11-19 20:07:23.617524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.617677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.060 [2024-11-19 20:07:23.618457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.060 [2024-11-19 20:07:23.618495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.618504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.060 [2024-11-19 20:07:23.618516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:17:50.060 [2024-11-19 20:07:23.618524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.621137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:50.060 [2024-11-19 20:07:23.635725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.635789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:50.060 [2024-11-19 20:07:23.635804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.599 ms 00:17:50.060 [2024-11-19 20:07:23.635815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.635942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.635957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:50.060 [2024-11-19 20:07:23.635966] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:50.060 [2024-11-19 20:07:23.635976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.644469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.644519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.060 [2024-11-19 20:07:23.644530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.431 ms 00:17:50.060 [2024-11-19 20:07:23.644541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.644663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.644677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.060 [2024-11-19 20:07:23.644687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:50.060 [2024-11-19 20:07:23.644711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.644745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.644755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.060 [2024-11-19 20:07:23.644764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:50.060 [2024-11-19 20:07:23.644773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.644799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.060 [2024-11-19 20:07:23.648894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.648932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.060 [2024-11-19 20:07:23.648945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.099 ms 00:17:50.060 [2024-11-19 20:07:23.648953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.649033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.649043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.060 [2024-11-19 20:07:23.649054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:50.060 [2024-11-19 20:07:23.649065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.649088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.060 [2024-11-19 20:07:23.649109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:50.060 [2024-11-19 20:07:23.649156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:50.060 [2024-11-19 20:07:23.649172] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:50.060 [2024-11-19 20:07:23.649304] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.060 [2024-11-19 20:07:23.649316] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.060 [2024-11-19 20:07:23.649332] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.060 [2024-11-19 20:07:23.649345] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.060 [2024-11-19 20:07:23.649357] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.060 [2024-11-19 20:07:23.649366] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.060 [2024-11-19 20:07:23.649375] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.060 [2024-11-19 20:07:23.649383] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.060 [2024-11-19 20:07:23.649395] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.060 [2024-11-19 20:07:23.649404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.649413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.060 [2024-11-19 20:07:23.649421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:17:50.060 [2024-11-19 20:07:23.649431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.649521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.060 [2024-11-19 20:07:23.649545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.060 [2024-11-19 20:07:23.649553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:50.060 [2024-11-19 20:07:23.649562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.060 [2024-11-19 20:07:23.649669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.060 [2024-11-19 20:07:23.649682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.060 [2024-11-19 20:07:23.649691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.060 [2024-11-19 20:07:23.649701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.060 [2024-11-19 20:07:23.649709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.060 [2024-11-19 20:07:23.649718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.060 [2024-11-19 20:07:23.649725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.061 [2024-11-19 20:07:23.649745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.061 [2024-11-19 20:07:23.649760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.061 [2024-11-19 20:07:23.649769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.061 [2024-11-19 20:07:23.649775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.061 [2024-11-19 20:07:23.649786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.061 [2024-11-19 20:07:23.649793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:50.061 [2024-11-19 20:07:23.649802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.061 
[2024-11-19 20:07:23.649810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.061 [2024-11-19 20:07:23.649819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.061 [2024-11-19 20:07:23.649850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.061 [2024-11-19 20:07:23.649876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.061 [2024-11-19 20:07:23.649899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.061 [2024-11-19 20:07:23.649923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.061 [2024-11-19 20:07:23.649937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.061 [2024-11-19 20:07:23.649945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:50.061 [2024-11-19 20:07:23.649954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.061 [2024-11-19 20:07:23.649961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.061 [2024-11-19 20:07:23.649969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:50.061 [2024-11-19 20:07:23.649975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.061 [2024-11-19 20:07:23.649984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.061 [2024-11-19 20:07:23.649991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:50.061 [2024-11-19 20:07:23.650001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.061 [2024-11-19 20:07:23.650008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.061 [2024-11-19 20:07:23.650016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:50.061 [2024-11-19 20:07:23.650023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.061 [2024-11-19 20:07:23.650031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.061 [2024-11-19 20:07:23.650039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.061 [2024-11-19 20:07:23.650054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.061 [2024-11-19 20:07:23.650062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.061 [2024-11-19 20:07:23.650072] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:50.061 [2024-11-19 20:07:23.650079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.061 [2024-11-19 20:07:23.650088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.061 [2024-11-19 20:07:23.650095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.061 [2024-11-19 20:07:23.650104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.061 [2024-11-19 20:07:23.650112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.061 [2024-11-19 20:07:23.650122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.061 [2024-11-19 20:07:23.650131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.061 [2024-11-19 20:07:23.650151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:50.061 [2024-11-19 20:07:23.650162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.061 [2024-11-19 20:07:23.650169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:50.061 [2024-11-19 20:07:23.650179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:50.061 [2024-11-19 20:07:23.650186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:50.061 [2024-11-19 20:07:23.650195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:50.061 [2024-11-19 20:07:23.650204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:50.061 [2024-11-19 20:07:23.650213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:50.061 [2024-11-19 20:07:23.650233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:50.061 [2024-11-19 20:07:23.650277] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.061 [2024-11-19 
20:07:23.650286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.061 [2024-11-19 20:07:23.650306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.061 [2024-11-19 20:07:23.650315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.061 [2024-11-19 20:07:23.650323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.061 [2024-11-19 20:07:23.650332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.650341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.061 [2024-11-19 20:07:23.650352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:50.061 [2024-11-19 20:07:23.650360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.683370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.683419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.061 [2024-11-19 20:07:23.683433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.945 ms 00:17:50.061 [2024-11-19 20:07:23.683442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.683582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.683593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.061 [2024-11-19 20:07:23.683604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:50.061 [2024-11-19 20:07:23.683612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.719385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.719433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.061 [2024-11-19 20:07:23.719451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.744 ms 00:17:50.061 [2024-11-19 20:07:23.719460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.719557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.719566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.061 [2024-11-19 20:07:23.719578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:50.061 [2024-11-19 20:07:23.719586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.720158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.720196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.061 [2024-11-19 20:07:23.720211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:50.061 [2024-11-19 20:07:23.720241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.720403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.720412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.061 [2024-11-19 20:07:23.720423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:50.061 [2024-11-19 20:07:23.720431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.061 [2024-11-19 20:07:23.738703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.061 [2024-11-19 20:07:23.738749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.061 [2024-11-19 20:07:23.738762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.246 ms 00:17:50.062 [2024-11-19 20:07:23.738771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.062 [2024-11-19 20:07:23.753647] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:50.062 [2024-11-19 20:07:23.753701] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:50.062 [2024-11-19 20:07:23.753717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.062 [2024-11-19 20:07:23.753726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:50.062 [2024-11-19 20:07:23.753737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.825 ms 00:17:50.062 [2024-11-19 20:07:23.753745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.062 [2024-11-19 20:07:23.779964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.062 [2024-11-19 20:07:23.780018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:50.062 [2024-11-19 20:07:23.780033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.112 ms 00:17:50.062 [2024-11-19 20:07:23.780042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.062 [2024-11-19 20:07:23.793476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.062 [2024-11-19 20:07:23.793524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:50.062 [2024-11-19 20:07:23.793557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.325 ms 00:17:50.062 [2024-11-19 20:07:23.793565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.062 [2024-11-19 20:07:23.806325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.062 [2024-11-19 20:07:23.806373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:50.062 [2024-11-19 20:07:23.806389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:17:50.062 [2024-11-19 20:07:23.806396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.062 [2024-11-19 20:07:23.807053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.062 [2024-11-19 20:07:23.807083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:50.062 [2024-11-19 20:07:23.807096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:50.062 [2024-11-19 20:07:23.807103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 
20:07:23.886481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.886556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:50.323 [2024-11-19 20:07:23.886578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.345 ms 00:17:50.323 [2024-11-19 20:07:23.886587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.898128] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:50.323 [2024-11-19 20:07:23.917756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.917821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.323 [2024-11-19 20:07:23.917838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.048 ms 00:17:50.323 [2024-11-19 20:07:23.917849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.917948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.917962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:50.323 [2024-11-19 20:07:23.917973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:50.323 [2024-11-19 20:07:23.917983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.918041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.918054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:50.323 [2024-11-19 20:07:23.918062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:50.323 [2024-11-19 20:07:23.918073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.918101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.918112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:50.323 [2024-11-19 20:07:23.918120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:50.323 [2024-11-19 20:07:23.918133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.918171] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:50.323 [2024-11-19 20:07:23.918187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.918195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:50.323 [2024-11-19 20:07:23.918209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:50.323 [2024-11-19 20:07:23.918216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.944579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.944631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:50.323 [2024-11-19 20:07:23.944648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.300 ms 00:17:50.323 [2024-11-19 20:07:23.944656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.944799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.323 [2024-11-19 20:07:23.944811] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:50.323 [2024-11-19 20:07:23.944823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:50.323 [2024-11-19 20:07:23.944834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.323 [2024-11-19 20:07:23.945980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.323 [2024-11-19 20:07:23.949753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.261 ms, result 0 00:17:50.323 [2024-11-19 20:07:23.952353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.323 Some configs were skipped because the RPC state that can call them passed over. 00:17:50.323 20:07:23 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:50.584 [2024-11-19 20:07:24.185104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.584 [2024-11-19 20:07:24.185177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:50.584 [2024-11-19 20:07:24.185191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:17:50.584 [2024-11-19 20:07:24.185202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.584 [2024-11-19 20:07:24.185255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.101 ms, result 0 00:17:50.584 true 00:17:50.584 20:07:24 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:50.846 [2024-11-19 20:07:24.389115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.846 [2024-11-19 20:07:24.389173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:50.846 [2024-11-19 20:07:24.389188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:17:50.846 [2024-11-19 20:07:24.389195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.846 [2024-11-19 20:07:24.389249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.835 ms, result 0 00:17:50.846 true 00:17:50.846 20:07:24 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74185 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74185 ']' 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74185 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74185 00:17:50.846 killing process with pid 74185 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74185' 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74185 00:17:50.846 20:07:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74185 00:17:51.417 [2024-11-19 20:07:25.110515] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.110565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.417 [2024-11-19 20:07:25.110575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.417 [2024-11-19 20:07:25.110582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.110600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:51.417 [2024-11-19 20:07:25.112698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.112724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.417 [2024-11-19 20:07:25.112736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:17:51.417 [2024-11-19 20:07:25.112742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.112962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.112970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.417 [2024-11-19 20:07:25.112978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:51.417 [2024-11-19 20:07:25.112984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.116582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.116608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.417 [2024-11-19 20:07:25.116619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:17:51.417 [2024-11-19 20:07:25.116625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.121857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.121881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:51.417 [2024-11-19 20:07:25.121890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.201 ms 00:17:51.417 [2024-11-19 20:07:25.121896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.130315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.130339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.417 [2024-11-19 20:07:25.130349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.373 ms 00:17:51.417 [2024-11-19 20:07:25.130360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.137024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.137050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.417 [2024-11-19 20:07:25.137061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.631 ms 00:17:51.417 [2024-11-19 20:07:25.137068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.417 [2024-11-19 20:07:25.137173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.417 [2024-11-19 20:07:25.137181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.417 [2024-11-19 20:07:25.137189] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:51.417 [2024-11-19 20:07:25.137195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.418 [2024-11-19 20:07:25.145746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.418 [2024-11-19 20:07:25.145770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:51.418 [2024-11-19 20:07:25.145778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.534 ms 00:17:51.418 [2024-11-19 20:07:25.145783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.418 [2024-11-19 20:07:25.153924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.418 [2024-11-19 20:07:25.153949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:51.418 [2024-11-19 20:07:25.153959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.109 ms 00:17:51.418 [2024-11-19 20:07:25.153965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.418 [2024-11-19 20:07:25.161607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.418 [2024-11-19 20:07:25.161633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.418 [2024-11-19 20:07:25.161643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.612 ms 00:17:51.418 [2024-11-19 20:07:25.161649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.418 [2024-11-19 20:07:25.169393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.418 [2024-11-19 20:07:25.169419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.418 [2024-11-19 20:07:25.169428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms 00:17:51.418 [2024-11-19 20:07:25.169433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.418 [2024-11-19 20:07:25.169460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.418 [2024-11-19 20:07:25.169471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169550] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 
[2024-11-19 20:07:25.169722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:51.418 [2024-11-19 20:07:25.169881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:51.418 [2024-11-19 20:07:25.169975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.169982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.169987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.169995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.419 [2024-11-19 20:07:25.170145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.419 [2024-11-19 20:07:25.170155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:17:51.419 [2024-11-19 20:07:25.170165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.419 [2024-11-19 20:07:25.170175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.419 [2024-11-19 20:07:25.170180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.419 [2024-11-19 20:07:25.170187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.419 [2024-11-19 20:07:25.170193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.419 [2024-11-19 20:07:25.170200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:51.419 [2024-11-19 20:07:25.170206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.419 [2024-11-19 20:07:25.170212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:51.419 [2024-11-19 20:07:25.170217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.419 [2024-11-19 20:07:25.170232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
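The ftl_dev_dump_stats block above reports WAF: inf. WAF is the write amplification factor, conventionally the ratio of total media writes to user writes; the counters in this dump (total writes: 960, user writes: 0) make that ratio degenerate to infinity, which matches the inf the log prints. A minimal shell sketch of the same arithmetic, with variable names chosen here for illustration rather than taken from SPDK:

    # Counter values copied from the dump above; zero user writes makes
    # the ratio undefined, which the FTL debug dump renders as "inf".
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"
    else
        awk -v t="$total_writes" -v u="$user_writes" \
            'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi

A zero user-write count is expected at this point: the test only trimmed LBA ranges through bdev_ftl_unmap and never wrote user data through the bdev before this shutdown, so the 960 total writes are the device's own metadata traffic.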
00:17:51.419 [2024-11-19 20:07:25.170238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.419 [2024-11-19 20:07:25.170246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:17:51.419 [2024-11-19 20:07:25.170252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.419 [2024-11-19 20:07:25.179633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.419 [2024-11-19 20:07:25.179656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.419 [2024-11-19 20:07:25.179667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.361 ms 00:17:51.419 [2024-11-19 20:07:25.179673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.419 [2024-11-19 20:07:25.179957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.419 [2024-11-19 20:07:25.179969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:51.419 [2024-11-19 20:07:25.179978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:51.419 [2024-11-19 20:07:25.179985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.214894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.214923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.679 [2024-11-19 20:07:25.214933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.214939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.215877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.215902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.679 [2024-11-19 20:07:25.215911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.215919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.215956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.215963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.679 [2024-11-19 20:07:25.215972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.215977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.215992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.215998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.679 [2024-11-19 20:07:25.216005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.216010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.275653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.275683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.679 [2024-11-19 20:07:25.275693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.275699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 
20:07:25.324052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.679 [2024-11-19 20:07:25.324092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.679 [2024-11-19 20:07:25.324172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.679 [2024-11-19 20:07:25.324214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.679 [2024-11-19 20:07:25.324313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:51.679 [2024-11-19 20:07:25.324357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.679 [2024-11-19 20:07:25.324407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.679 [2024-11-19 20:07:25.324453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.679 [2024-11-19 20:07:25.324460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.679 [2024-11-19 20:07:25.324465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.679 [2024-11-19 20:07:25.324567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 214.034 ms, result 0 00:17:52.246 20:07:25 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.246 [2024-11-19 20:07:25.899661] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:52.246 [2024-11-19 20:07:25.899954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74232 ] 00:17:52.505 [2024-11-19 20:07:26.056083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.505 [2024-11-19 20:07:26.132092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.766 [2024-11-19 20:07:26.337342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.766 [2024-11-19 20:07:26.337393] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.766 [2024-11-19 20:07:26.491307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.491345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:52.766 [2024-11-19 20:07:26.491355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.766 [2024-11-19 20:07:26.491361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.493415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.493445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.766 [2024-11-19 20:07:26.493453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:17:52.766 [2024-11-19 20:07:26.493459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.493516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:52.766 [2024-11-19 20:07:26.494033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:52.766 [2024-11-19 20:07:26.494057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.494063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.766 [2024-11-19 20:07:26.494070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:17:52.766 [2024-11-19 20:07:26.494076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.495067] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:52.766 [2024-11-19 20:07:26.504982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.505012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:52.766 [2024-11-19 20:07:26.505021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.916 ms 00:17:52.766 [2024-11-19 20:07:26.505028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.505096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.505105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:52.766 [2024-11-19 20:07:26.505112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:52.766 [2024-11-19 
20:07:26.505118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.509576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.509599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.766 [2024-11-19 20:07:26.509606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.429 ms 00:17:52.766 [2024-11-19 20:07:26.509612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.509686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.509694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.766 [2024-11-19 20:07:26.509701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:52.766 [2024-11-19 20:07:26.509706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.509721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.509730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:52.766 [2024-11-19 20:07:26.509736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:52.766 [2024-11-19 20:07:26.509741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.509758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:52.766 [2024-11-19 20:07:26.512445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.512467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.766 [2024-11-19 20:07:26.512474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:17:52.766 [2024-11-19 20:07:26.512480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.512508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.512515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:52.766 [2024-11-19 20:07:26.512521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:52.766 [2024-11-19 20:07:26.512526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.512542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:52.766 [2024-11-19 20:07:26.512558] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:52.766 [2024-11-19 20:07:26.512585] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:52.766 [2024-11-19 20:07:26.512596] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:52.766 [2024-11-19 20:07:26.512676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:52.766 [2024-11-19 20:07:26.512684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:52.766 [2024-11-19 20:07:26.512692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
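The FTL startup sequence being traced here belongs to the spdk_dd step launched at trim.sh@105 a few lines up. spdk_dd follows dd-style conventions: --ib names an input bdev, --of an output file, --count the number of blocks to copy, and --json supplies the bdev configuration to load. Reproduced verbatim from the log as a standalone invocation:

    # Read 65536 blocks from the ftl0 bdev back into a flat file, using
    # the FTL bdev configuration JSON the test saved earlier.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The "Copying: N/256 [MB]" progress lines further down are this command reading the data back out of ftl0; 65536 blocks adding up to 256 MiB implies a 4 KiB logical block size.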
00:17:52.766 [2024-11-19 20:07:26.512700] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:52.766 [2024-11-19 20:07:26.512709] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:52.766 [2024-11-19 20:07:26.512715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:52.766 [2024-11-19 20:07:26.512721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:52.766 [2024-11-19 20:07:26.512726] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:52.766 [2024-11-19 20:07:26.512731] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:52.766 [2024-11-19 20:07:26.512737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.512743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:52.766 [2024-11-19 20:07:26.512749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:52.766 [2024-11-19 20:07:26.512754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.512820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.766 [2024-11-19 20:07:26.512827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:52.766 [2024-11-19 20:07:26.512835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:52.766 [2024-11-19 20:07:26.512841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.766 [2024-11-19 20:07:26.512915] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:52.766 [2024-11-19 20:07:26.512928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:52.766 [2024-11-19 20:07:26.512935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.766 [2024-11-19 20:07:26.512941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.766 [2024-11-19 20:07:26.512947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:52.766 [2024-11-19 20:07:26.512953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:52.766 [2024-11-19 20:07:26.512958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:52.766 [2024-11-19 20:07:26.512963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:52.766 [2024-11-19 20:07:26.512969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:52.766 [2024-11-19 20:07:26.512974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.766 [2024-11-19 20:07:26.512979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:52.766 [2024-11-19 20:07:26.512984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:52.766 [2024-11-19 20:07:26.512989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.766 [2024-11-19 20:07:26.512997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:52.767 [2024-11-19 20:07:26.513002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:52.767 [2024-11-19 20:07:26.513008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:52.767 [2024-11-19 20:07:26.513019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:52.767 [2024-11-19 20:07:26.513034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:52.767 [2024-11-19 20:07:26.513050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:52.767 [2024-11-19 20:07:26.513065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:52.767 [2024-11-19 20:07:26.513080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:52.767 [2024-11-19 20:07:26.513095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.767 [2024-11-19 20:07:26.513105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:52.767 [2024-11-19 20:07:26.513110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:52.767 [2024-11-19 20:07:26.513115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.767 [2024-11-19 20:07:26.513120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:52.767 [2024-11-19 20:07:26.513124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:52.767 [2024-11-19 20:07:26.513129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:52.767 [2024-11-19 20:07:26.513139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:52.767 [2024-11-19 20:07:26.513144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:52.767 [2024-11-19 20:07:26.513155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:52.767 [2024-11-19 20:07:26.513160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.767 [2024-11-19 20:07:26.513172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:52.767 [2024-11-19 20:07:26.513179] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:52.767 [2024-11-19 20:07:26.513184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:52.767 [2024-11-19 20:07:26.513189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:52.767 [2024-11-19 20:07:26.513194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:52.767 [2024-11-19 20:07:26.513199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:52.767 [2024-11-19 20:07:26.513205] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:52.767 [2024-11-19 20:07:26.513212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:52.767 [2024-11-19 20:07:26.513242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:52.767 [2024-11-19 20:07:26.513248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:52.767 [2024-11-19 20:07:26.513253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:52.767 [2024-11-19 20:07:26.513259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:52.767 [2024-11-19 20:07:26.513264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:52.767 [2024-11-19 20:07:26.513270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:52.767 [2024-11-19 20:07:26.513276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:52.767 [2024-11-19 20:07:26.513282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:52.767 [2024-11-19 20:07:26.513287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:52.767 [2024-11-19 20:07:26.513315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:52.767 [2024-11-19 20:07:26.513321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:52.767 [2024-11-19 20:07:26.513332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:52.767 [2024-11-19 20:07:26.513338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:52.767 [2024-11-19 20:07:26.513343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:52.767 [2024-11-19 20:07:26.513349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-11-19 20:07:26.513354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:52.767 [2024-11-19 20:07:26.513362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:17:52.767 [2024-11-19 20:07:26.513367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.767 [2024-11-19 20:07:26.534120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-11-19 20:07:26.534149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.767 [2024-11-19 20:07:26.534157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.714 ms 00:17:52.767 [2024-11-19 20:07:26.534163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.767 [2024-11-19 20:07:26.534262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-11-19 20:07:26.534274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.767 [2024-11-19 20:07:26.534280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:52.767 [2024-11-19 20:07:26.534286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.571322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.571355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.026 [2024-11-19 20:07:26.571364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.020 ms 00:17:53.026 [2024-11-19 20:07:26.571373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.571432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.571441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.026 [2024-11-19 20:07:26.571448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:53.026 [2024-11-19 20:07:26.571453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.571741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.571762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.026 [2024-11-19 20:07:26.571769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:53.026 [2024-11-19 20:07:26.571777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.571884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.571898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.026 [2024-11-19 20:07:26.571905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:53.026 [2024-11-19 20:07:26.571910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.582672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.582699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.026 [2024-11-19 20:07:26.582706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.746 ms 00:17:53.026 [2024-11-19 20:07:26.582712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.592635] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:53.026 [2024-11-19 20:07:26.592663] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:53.026 [2024-11-19 20:07:26.592673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.592679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:53.026 [2024-11-19 20:07:26.592685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.877 ms 00:17:53.026 [2024-11-19 20:07:26.592691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.026 [2024-11-19 20:07:26.611494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.026 [2024-11-19 20:07:26.611529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:53.026 [2024-11-19 20:07:26.611538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.754 ms 00:17:53.027 [2024-11-19 20:07:26.611544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.620650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.620675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:53.027 [2024-11-19 20:07:26.620683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.051 ms 00:17:53.027 [2024-11-19 20:07:26.620688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.629688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.629712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:53.027 [2024-11-19 20:07:26.629719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.956 ms 00:17:53.027 [2024-11-19 20:07:26.629725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.630186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.630206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:53.027 [2024-11-19 20:07:26.630213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:17:53.027 [2024-11-19 20:07:26.630218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.674836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.674870] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:53.027 [2024-11-19 20:07:26.674880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.589 ms 00:17:53.027 [2024-11-19 20:07:26.674886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.683270] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:53.027 [2024-11-19 20:07:26.695073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.695103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:53.027 [2024-11-19 20:07:26.695113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.120 ms 00:17:53.027 [2024-11-19 20:07:26.695119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.695193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.695201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:53.027 [2024-11-19 20:07:26.695208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:53.027 [2024-11-19 20:07:26.695214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.695263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.695271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.027 [2024-11-19 20:07:26.695278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:53.027 [2024-11-19 20:07:26.695285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.695307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.695315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.027 [2024-11-19 20:07:26.695321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.027 [2024-11-19 20:07:26.695327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.695349] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:53.027 [2024-11-19 20:07:26.695357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.695363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:53.027 [2024-11-19 20:07:26.695369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:53.027 [2024-11-19 20:07:26.695374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.713934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.713963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.027 [2024-11-19 20:07:26.713972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.545 ms 00:17:53.027 [2024-11-19 20:07:26.713978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.714050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.027 [2024-11-19 20:07:26.714059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.027 [2024-11-19 20:07:26.714066] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:53.027 [2024-11-19 20:07:26.714072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.027 [2024-11-19 20:07:26.714676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:53.027 [2024-11-19 20:07:26.716923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.153 ms, result 0 00:17:53.027 [2024-11-19 20:07:26.717913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.027 [2024-11-19 20:07:26.732618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.411  [2024-11-19T20:07:28.778Z] Copying: 20/256 [MB] (20 MBps) [2024-11-19T20:07:30.168Z] Copying: 35/256 [MB] (15 MBps) [2024-11-19T20:07:31.107Z] Copying: 49/256 [MB] (13 MBps) [2024-11-19T20:07:32.048Z] Copying: 64/256 [MB] (14 MBps) [2024-11-19T20:07:32.988Z] Copying: 81/256 [MB] (17 MBps) [2024-11-19T20:07:33.927Z] Copying: 98/256 [MB] (16 MBps) [2024-11-19T20:07:34.870Z] Copying: 108/256 [MB] (10 MBps) [2024-11-19T20:07:35.812Z] Copying: 123/256 [MB] (14 MBps) [2024-11-19T20:07:37.193Z] Copying: 133/256 [MB] (10 MBps) [2024-11-19T20:07:38.134Z] Copying: 143/256 [MB] (10 MBps) [2024-11-19T20:07:39.075Z] Copying: 160/256 [MB] (16 MBps) [2024-11-19T20:07:40.027Z] Copying: 177/256 [MB] (17 MBps) [2024-11-19T20:07:40.969Z] Copying: 190/256 [MB] (12 MBps) [2024-11-19T20:07:41.936Z] Copying: 201/256 [MB] (11 MBps) [2024-11-19T20:07:42.943Z] Copying: 212/256 [MB] (10 MBps) [2024-11-19T20:07:43.887Z] Copying: 224/256 [MB] (12 MBps) [2024-11-19T20:07:44.830Z] Copying: 237/256 [MB] (12 MBps) [2024-11-19T20:07:45.400Z] Copying: 247/256 [MB] (10 MBps) [2024-11-19T20:07:45.400Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-19 20:07:45.363874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.606 [2024-11-19 20:07:45.376706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.606 [2024-11-19 20:07:45.376760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.606 [2024-11-19 20:07:45.376775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:11.606 [2024-11-19 20:07:45.376795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.606 [2024-11-19 20:07:45.376821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:11.606 [2024-11-19 20:07:45.379915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.606 [2024-11-19 20:07:45.379955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.606 [2024-11-19 20:07:45.379967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:18:11.606 [2024-11-19 20:07:45.379975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.606 [2024-11-19 20:07:45.380277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.606 [2024-11-19 20:07:45.380290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.606 [2024-11-19 20:07:45.380299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:11.606 [2024-11-19 20:07:45.380308] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:11.606 [2024-11-19 20:07:45.384599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.606 [2024-11-19 20:07:45.384641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.606 [2024-11-19 20:07:45.384652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:18:11.606 [2024-11-19 20:07:45.384660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.606 [2024-11-19 20:07:45.391694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.606 [2024-11-19 20:07:45.391733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:11.606 [2024-11-19 20:07:45.391744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.010 ms 00:18:11.606 [2024-11-19 20:07:45.391752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.417056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.417104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.869 [2024-11-19 20:07:45.417117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.233 ms 00:18:11.869 [2024-11-19 20:07:45.417125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.432957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.433006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.869 [2024-11-19 20:07:45.433019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.778 ms 00:18:11.869 [2024-11-19 20:07:45.433031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.433189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.433202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.869 [2024-11-19 20:07:45.433213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:11.869 [2024-11-19 20:07:45.433242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.458583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.458627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:11.869 [2024-11-19 20:07:45.458639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.311 ms 00:18:11.869 [2024-11-19 20:07:45.458647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.483831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.483874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:11.869 [2024-11-19 20:07:45.483885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.136 ms 00:18:11.869 [2024-11-19 20:07:45.483893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.508425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.508471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.869 [2024-11-19 20:07:45.508485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.482 ms 00:18:11.869 
[2024-11-19 20:07:45.508493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.532706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.869 [2024-11-19 20:07:45.532750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.869 [2024-11-19 20:07:45.532762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:18:11.869 [2024-11-19 20:07:45.532769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.869 [2024-11-19 20:07:45.532816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.869 [2024-11-19 20:07:45.532832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.532993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533001] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:11.869 [2024-11-19 20:07:45.533041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 
[2024-11-19 20:07:45.533198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:18:11.870 [2024-11-19 20:07:45.533433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:11.870 [2024-11-19 20:07:45.533715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:11.870 [2024-11-19 20:07:45.533724] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d764a84c-1666-4f29-b824-623425701006 00:18:11.870 [2024-11-19 20:07:45.533735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:11.870 [2024-11-19 20:07:45.533743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:11.870 [2024-11-19 20:07:45.533752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.870 [2024-11-19 20:07:45.533760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.870 [2024-11-19 20:07:45.533770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.870 [2024-11-19 20:07:45.533779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.870 [2024-11-19 20:07:45.533787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.870 [2024-11-19 20:07:45.533793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.870 [2024-11-19 20:07:45.533801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.870 [2024-11-19 20:07:45.533808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.870 [2024-11-19 20:07:45.533820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.870 [2024-11-19 20:07:45.533830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:18:11.870 [2024-11-19 20:07:45.533839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.870 [2024-11-19 20:07:45.547412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.870 [2024-11-19 20:07:45.547456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.870 [2024-11-19 20:07:45.547468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.540 ms 00:18:11.870 [2024-11-19 20:07:45.547476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.871 [2024-11-19 20:07:45.547872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.871 [2024-11-19 20:07:45.547894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.871 [2024-11-19 20:07:45.547905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:11.871 [2024-11-19 20:07:45.547914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.871 [2024-11-19 20:07:45.586385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.871 [2024-11-19 20:07:45.586430] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.871 [2024-11-19 20:07:45.586442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.871 [2024-11-19 20:07:45.586451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.871 [2024-11-19 20:07:45.586560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.871 [2024-11-19 20:07:45.586571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.871 [2024-11-19 20:07:45.586581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.871 [2024-11-19 20:07:45.586588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.871 [2024-11-19 20:07:45.586645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.871 [2024-11-19 20:07:45.586657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.871 [2024-11-19 20:07:45.586667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.871 [2024-11-19 20:07:45.586675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.871 [2024-11-19 20:07:45.586694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.871 [2024-11-19 20:07:45.586705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.871 [2024-11-19 20:07:45.586714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.871 [2024-11-19 20:07:45.586721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.671045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.671097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.133 [2024-11-19 20:07:45.671111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.671120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.133 [2024-11-19 20:07:45.740118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.133 [2024-11-19 20:07:45.740243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.133 [2024-11-19 20:07:45.740308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:12.133 [2024-11-19 20:07:45.740427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.133 [2024-11-19 20:07:45.740437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.133 [2024-11-19 20:07:45.740501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.133 [2024-11-19 20:07:45.740576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.133 [2024-11-19 20:07:45.740585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.133 [2024-11-19 20:07:45.740634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.133 [2024-11-19 20:07:45.740646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.134 [2024-11-19 20:07:45.740658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.134 [2024-11-19 20:07:45.740666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.134 [2024-11-19 20:07:45.740824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.113 ms, result 0 00:18:12.706 00:18:12.706 00:18:12.706 20:07:46 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:13.278 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:13.278 20:07:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:13.278 20:07:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:13.278 20:07:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:13.278 20:07:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:13.278 20:07:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:13.537 20:07:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:13.537 20:07:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74185 00:18:13.537 20:07:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74185 ']' 00:18:13.537 20:07:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74185 00:18:13.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74185) - No such process 00:18:13.537 Process with pid 74185 is not found 00:18:13.537 20:07:47 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74185 is not found' 00:18:13.537 00:18:13.537 real 1m22.967s 00:18:13.537 user 1m39.419s 00:18:13.537 sys 0m10.539s 00:18:13.537 20:07:47 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.537 ************************************ 00:18:13.537 END TEST ftl_trim 00:18:13.537 ************************************ 
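The fio_kill teardown above goes through the shared killprocess helper, whose visible steps are the empty-pid guard ('[' -z 74185 ']' at autotest_common.sh line 954), a liveness probe with kill -0 (which delivers no signal and simply fails once the pid is gone, hence "No such process"), and the fallback echo. A minimal sketch of that idiom, assuming the real helper in autotest_common.sh also signals the process when the probe succeeds:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # guard traced at line 954 above
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"                         # assumed: still alive, ask it to exit
        else
            echo "Process with pid $pid is not found"   # path taken in this run
        fi
    }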
00:18:13.537 20:07:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 20:07:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:13.537 20:07:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:13.537 20:07:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.537 20:07:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 ************************************ 00:18:13.537 START TEST ftl_restore 00:18:13.537 ************************************ 00:18:13.537 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:13.537 * Looking for test storage... 00:18:13.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:13.537 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.537 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.537 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.799 20:07:47 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.799 --rc genhtml_branch_coverage=1 00:18:13.799 --rc genhtml_function_coverage=1 00:18:13.799 --rc genhtml_legend=1 00:18:13.799 --rc geninfo_all_blocks=1 00:18:13.799 --rc geninfo_unexecuted_blocks=1 00:18:13.799 00:18:13.799 ' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.799 --rc genhtml_branch_coverage=1 00:18:13.799 --rc genhtml_function_coverage=1 00:18:13.799 --rc genhtml_legend=1 00:18:13.799 --rc geninfo_all_blocks=1 00:18:13.799 --rc geninfo_unexecuted_blocks=1 00:18:13.799 00:18:13.799 ' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.799 --rc genhtml_branch_coverage=1 00:18:13.799 --rc genhtml_function_coverage=1 00:18:13.799 --rc genhtml_legend=1 00:18:13.799 --rc geninfo_all_blocks=1 00:18:13.799 --rc geninfo_unexecuted_blocks=1 00:18:13.799 00:18:13.799 ' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.799 --rc genhtml_branch_coverage=1 00:18:13.799 --rc genhtml_function_coverage=1 00:18:13.799 --rc genhtml_legend=1 00:18:13.799 --rc geninfo_all_blocks=1 00:18:13.799 --rc geninfo_unexecuted_blocks=1 00:18:13.799 00:18:13.799 ' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
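The xtrace above walks lcov's version through scripts/common.sh: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' and ':' (IFS=.-:), pads the shorter one with zeros, and compares components numerically, so lcov 1.15 is judged older than 2 and the --rc lcov_branch_coverage/lcov_function_coverage options are selected. A standalone sketch mirroring the comparison visible in the trace (the real cmp_versions/decimal helpers live in scripts/common.sh):

    lt() {   # "is version $1 strictly older than version $2"
        local IFS=.-: i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # newer at this component
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # 1 < 2: lt 1.15 2 succeeds
        done
        return 1   # equal versions are not strictly less-than
    }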
00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RWiRzTIPXf 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:13.799 
20:07:47 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74524 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74524 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74524 ']' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.799 20:07:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:13.799 20:07:47 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:13.799 [2024-11-19 20:07:47.461251] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:13.799 [2024-11-19 20:07:47.461898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74524 ] 00:18:14.061 [2024-11-19 20:07:47.621517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.061 [2024-11-19 20:07:47.723013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.633 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.633 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:14.633 20:07:48 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:15.206 20:07:48 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:15.206 20:07:48 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:15.206 20:07:48 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:15.206 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:15.206 { 00:18:15.206 "name": "nvme0n1", 00:18:15.206 "aliases": [ 00:18:15.206 "21fa4c22-c52a-4d0b-82a1-b7eb9b2f319b" 00:18:15.206 ], 00:18:15.207 "product_name": "NVMe disk", 00:18:15.207 "block_size": 4096, 00:18:15.207 "num_blocks": 1310720, 00:18:15.207 "uuid": 
"21fa4c22-c52a-4d0b-82a1-b7eb9b2f319b", 00:18:15.207 "numa_id": -1, 00:18:15.207 "assigned_rate_limits": { 00:18:15.207 "rw_ios_per_sec": 0, 00:18:15.207 "rw_mbytes_per_sec": 0, 00:18:15.207 "r_mbytes_per_sec": 0, 00:18:15.207 "w_mbytes_per_sec": 0 00:18:15.207 }, 00:18:15.207 "claimed": true, 00:18:15.207 "claim_type": "read_many_write_one", 00:18:15.207 "zoned": false, 00:18:15.207 "supported_io_types": { 00:18:15.207 "read": true, 00:18:15.207 "write": true, 00:18:15.207 "unmap": true, 00:18:15.207 "flush": true, 00:18:15.207 "reset": true, 00:18:15.207 "nvme_admin": true, 00:18:15.207 "nvme_io": true, 00:18:15.207 "nvme_io_md": false, 00:18:15.207 "write_zeroes": true, 00:18:15.207 "zcopy": false, 00:18:15.207 "get_zone_info": false, 00:18:15.207 "zone_management": false, 00:18:15.207 "zone_append": false, 00:18:15.207 "compare": true, 00:18:15.207 "compare_and_write": false, 00:18:15.207 "abort": true, 00:18:15.207 "seek_hole": false, 00:18:15.207 "seek_data": false, 00:18:15.207 "copy": true, 00:18:15.207 "nvme_iov_md": false 00:18:15.207 }, 00:18:15.207 "driver_specific": { 00:18:15.207 "nvme": [ 00:18:15.207 { 00:18:15.207 "pci_address": "0000:00:11.0", 00:18:15.207 "trid": { 00:18:15.207 "trtype": "PCIe", 00:18:15.207 "traddr": "0000:00:11.0" 00:18:15.207 }, 00:18:15.207 "ctrlr_data": { 00:18:15.207 "cntlid": 0, 00:18:15.207 "vendor_id": "0x1b36", 00:18:15.207 "model_number": "QEMU NVMe Ctrl", 00:18:15.207 "serial_number": "12341", 00:18:15.207 "firmware_revision": "8.0.0", 00:18:15.207 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:15.207 "oacs": { 00:18:15.207 "security": 0, 00:18:15.207 "format": 1, 00:18:15.207 "firmware": 0, 00:18:15.207 "ns_manage": 1 00:18:15.207 }, 00:18:15.207 "multi_ctrlr": false, 00:18:15.207 "ana_reporting": false 00:18:15.207 }, 00:18:15.207 "vs": { 00:18:15.207 "nvme_version": "1.4" 00:18:15.207 }, 00:18:15.207 "ns_data": { 00:18:15.207 "id": 1, 00:18:15.207 "can_share": false 00:18:15.207 } 00:18:15.207 } 00:18:15.207 ], 00:18:15.207 "mp_policy": "active_passive" 00:18:15.207 } 00:18:15.207 } 00:18:15.207 ]' 00:18:15.207 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:15.207 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:15.207 20:07:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:15.468 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:15.468 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:15.468 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ec585924-d21f-4961-a770-3027dd16f3b3 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:15.468 20:07:49 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec585924-d21f-4961-a770-3027dd16f3b3 00:18:15.729 20:07:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:15.990 20:07:49 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8fb33f84-d81b-41bd-8ca2-39e322b7681d 00:18:15.990 20:07:49 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fb33f84-d81b-41bd-8ca2-39e322b7681d 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:16.252 20:07:49 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.252 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.252 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:16.252 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:16.252 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:16.252 20:07:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:16.513 { 00:18:16.513 "name": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:16.513 "aliases": [ 00:18:16.513 "lvs/nvme0n1p0" 00:18:16.513 ], 00:18:16.513 "product_name": "Logical Volume", 00:18:16.513 "block_size": 4096, 00:18:16.513 "num_blocks": 26476544, 00:18:16.513 "uuid": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:16.513 "assigned_rate_limits": { 00:18:16.513 "rw_ios_per_sec": 0, 00:18:16.513 "rw_mbytes_per_sec": 0, 00:18:16.513 "r_mbytes_per_sec": 0, 00:18:16.513 "w_mbytes_per_sec": 0 00:18:16.513 }, 00:18:16.513 "claimed": false, 00:18:16.513 "zoned": false, 00:18:16.513 "supported_io_types": { 00:18:16.513 "read": true, 00:18:16.513 "write": true, 00:18:16.513 "unmap": true, 00:18:16.513 "flush": false, 00:18:16.513 "reset": true, 00:18:16.513 "nvme_admin": false, 00:18:16.513 "nvme_io": false, 00:18:16.513 "nvme_io_md": false, 00:18:16.513 "write_zeroes": true, 00:18:16.513 "zcopy": false, 00:18:16.513 "get_zone_info": false, 00:18:16.513 "zone_management": false, 00:18:16.513 "zone_append": false, 00:18:16.513 "compare": false, 00:18:16.513 "compare_and_write": false, 00:18:16.513 "abort": false, 00:18:16.513 "seek_hole": true, 00:18:16.513 "seek_data": true, 00:18:16.513 "copy": false, 00:18:16.513 "nvme_iov_md": false 00:18:16.513 }, 00:18:16.513 "driver_specific": { 00:18:16.513 "lvol": { 00:18:16.513 "lvol_store_uuid": "8fb33f84-d81b-41bd-8ca2-39e322b7681d", 00:18:16.513 "base_bdev": "nvme0n1", 00:18:16.513 "thin_provision": true, 00:18:16.513 "num_allocated_clusters": 0, 00:18:16.513 "snapshot": false, 00:18:16.513 "clone": false, 00:18:16.513 "esnap_clone": false 00:18:16.513 } 00:18:16.513 } 00:18:16.513 } 00:18:16.513 ]' 00:18:16.513 20:07:50 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:16.513 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:16.513 20:07:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:16.513 20:07:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:16.513 20:07:50 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:16.773 20:07:50 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:16.773 20:07:50 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:16.773 20:07:50 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.773 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:16.773 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:16.773 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:16.773 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:16.773 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:17.031 { 00:18:17.031 "name": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:17.031 "aliases": [ 00:18:17.031 "lvs/nvme0n1p0" 00:18:17.031 ], 00:18:17.031 "product_name": "Logical Volume", 00:18:17.031 "block_size": 4096, 00:18:17.031 "num_blocks": 26476544, 00:18:17.031 "uuid": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:17.031 "assigned_rate_limits": { 00:18:17.031 "rw_ios_per_sec": 0, 00:18:17.031 "rw_mbytes_per_sec": 0, 00:18:17.031 "r_mbytes_per_sec": 0, 00:18:17.031 "w_mbytes_per_sec": 0 00:18:17.031 }, 00:18:17.031 "claimed": false, 00:18:17.031 "zoned": false, 00:18:17.031 "supported_io_types": { 00:18:17.031 "read": true, 00:18:17.031 "write": true, 00:18:17.031 "unmap": true, 00:18:17.031 "flush": false, 00:18:17.031 "reset": true, 00:18:17.031 "nvme_admin": false, 00:18:17.031 "nvme_io": false, 00:18:17.031 "nvme_io_md": false, 00:18:17.031 "write_zeroes": true, 00:18:17.031 "zcopy": false, 00:18:17.031 "get_zone_info": false, 00:18:17.031 "zone_management": false, 00:18:17.031 "zone_append": false, 00:18:17.031 "compare": false, 00:18:17.031 "compare_and_write": false, 00:18:17.031 "abort": false, 00:18:17.031 "seek_hole": true, 00:18:17.031 "seek_data": true, 00:18:17.031 "copy": false, 00:18:17.031 "nvme_iov_md": false 00:18:17.031 }, 00:18:17.031 "driver_specific": { 00:18:17.031 "lvol": { 00:18:17.031 "lvol_store_uuid": "8fb33f84-d81b-41bd-8ca2-39e322b7681d", 00:18:17.031 "base_bdev": "nvme0n1", 00:18:17.031 "thin_provision": true, 00:18:17.031 "num_allocated_clusters": 0, 00:18:17.031 "snapshot": false, 00:18:17.031 "clone": false, 00:18:17.031 "esnap_clone": false 00:18:17.031 } 00:18:17.031 } 00:18:17.031 } 00:18:17.031 ]' 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
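Those two jq probes are how get_bdev_size turns the JSON dump above into the MiB figures the test uses: block_size times num_blocks, divided down to MiB, gives 4096 * 26476544 / 1024 / 1024 = 103424 for the lvol (and 4096 * 1310720 gave 5120 for nvme0n1 earlier). A condensed sketch of that sequence using the same rpc.py and jq calls seen in the trace (rpc.py path shortened here):

    bdev=54245a68-8158-4711-976a-4b3e8cdcb07e
    info=$(scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))           # 103424 MiB, as logged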
00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:17.031 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:17.031 20:07:50 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:17.031 20:07:50 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:17.290 20:07:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:17.290 20:07:50 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:17.290 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:17.290 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:17.290 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:17.290 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:17.290 20:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54245a68-8158-4711-976a-4b3e8cdcb07e 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:17.549 { 00:18:17.549 "name": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:17.549 "aliases": [ 00:18:17.549 "lvs/nvme0n1p0" 00:18:17.549 ], 00:18:17.549 "product_name": "Logical Volume", 00:18:17.549 "block_size": 4096, 00:18:17.549 "num_blocks": 26476544, 00:18:17.549 "uuid": "54245a68-8158-4711-976a-4b3e8cdcb07e", 00:18:17.549 "assigned_rate_limits": { 00:18:17.549 "rw_ios_per_sec": 0, 00:18:17.549 "rw_mbytes_per_sec": 0, 00:18:17.549 "r_mbytes_per_sec": 0, 00:18:17.549 "w_mbytes_per_sec": 0 00:18:17.549 }, 00:18:17.549 "claimed": false, 00:18:17.549 "zoned": false, 00:18:17.549 "supported_io_types": { 00:18:17.549 "read": true, 00:18:17.549 "write": true, 00:18:17.549 "unmap": true, 00:18:17.549 "flush": false, 00:18:17.549 "reset": true, 00:18:17.549 "nvme_admin": false, 00:18:17.549 "nvme_io": false, 00:18:17.549 "nvme_io_md": false, 00:18:17.549 "write_zeroes": true, 00:18:17.549 "zcopy": false, 00:18:17.549 "get_zone_info": false, 00:18:17.549 "zone_management": false, 00:18:17.549 "zone_append": false, 00:18:17.549 "compare": false, 00:18:17.549 "compare_and_write": false, 00:18:17.549 "abort": false, 00:18:17.549 "seek_hole": true, 00:18:17.549 "seek_data": true, 00:18:17.549 "copy": false, 00:18:17.549 "nvme_iov_md": false 00:18:17.549 }, 00:18:17.549 "driver_specific": { 00:18:17.549 "lvol": { 00:18:17.549 "lvol_store_uuid": "8fb33f84-d81b-41bd-8ca2-39e322b7681d", 00:18:17.549 "base_bdev": "nvme0n1", 00:18:17.549 "thin_provision": true, 00:18:17.549 "num_allocated_clusters": 0, 00:18:17.549 "snapshot": false, 00:18:17.549 "clone": false, 00:18:17.549 "esnap_clone": false 00:18:17.549 } 00:18:17.549 } 00:18:17.549 } 00:18:17.549 ]' 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:17.549 20:07:51 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:17.549 20:07:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 54245a68-8158-4711-976a-4b3e8cdcb07e --l2p_dram_limit 10' 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:17.549 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:17.549 20:07:51 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 54245a68-8158-4711-976a-4b3e8cdcb07e --l2p_dram_limit 10 -c nvc0n1p0 00:18:17.810 [2024-11-19 20:07:51.356162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.356198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:17.810 [2024-11-19 20:07:51.356210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.810 [2024-11-19 20:07:51.356216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.356264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.356272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.810 [2024-11-19 20:07:51.356280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:17.810 [2024-11-19 20:07:51.356286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.356304] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:17.810 [2024-11-19 20:07:51.356874] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:17.810 [2024-11-19 20:07:51.356896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.356902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.810 [2024-11-19 20:07:51.356910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:17.810 [2024-11-19 20:07:51.356917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.356992] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a40704e4-a446-4db0-82b2-c45640f57cbf 00:18:17.810 [2024-11-19 20:07:51.357919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.357942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:17.810 [2024-11-19 20:07:51.357950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:17.810 [2024-11-19 20:07:51.357957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.362527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 
20:07:51.362553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.810 [2024-11-19 20:07:51.362562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:18:17.810 [2024-11-19 20:07:51.362569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.362632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.362641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.810 [2024-11-19 20:07:51.362647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:17.810 [2024-11-19 20:07:51.362657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.362686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.362695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:17.810 [2024-11-19 20:07:51.362703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:17.810 [2024-11-19 20:07:51.362712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.362727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.810 [2024-11-19 20:07:51.365542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.365565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.810 [2024-11-19 20:07:51.365575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:18:17.810 [2024-11-19 20:07:51.365580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.365605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.365627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:17.810 [2024-11-19 20:07:51.365636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.810 [2024-11-19 20:07:51.365641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.365655] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:17.810 [2024-11-19 20:07:51.365759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:17.810 [2024-11-19 20:07:51.365775] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:17.810 [2024-11-19 20:07:51.365784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:17.810 [2024-11-19 20:07:51.365793] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:17.810 [2024-11-19 20:07:51.365800] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:17.810 [2024-11-19 20:07:51.365807] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:17.810 [2024-11-19 20:07:51.365813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:17.810 [2024-11-19 20:07:51.365822] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:17.810 [2024-11-19 20:07:51.365828] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:17.810 [2024-11-19 20:07:51.365835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.365841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:17.810 [2024-11-19 20:07:51.365848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:18:17.810 [2024-11-19 20:07:51.365859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.365923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.810 [2024-11-19 20:07:51.365930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:17.810 [2024-11-19 20:07:51.365938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:17.810 [2024-11-19 20:07:51.365944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.810 [2024-11-19 20:07:51.366022] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:17.810 [2024-11-19 20:07:51.366030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:17.810 [2024-11-19 20:07:51.366037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.810 [2024-11-19 20:07:51.366043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.810 [2024-11-19 20:07:51.366051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:17.810 [2024-11-19 20:07:51.366056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:17.810 [2024-11-19 20:07:51.366063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:17.810 [2024-11-19 20:07:51.366069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:17.810 [2024-11-19 20:07:51.366077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:17.810 [2024-11-19 20:07:51.366082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.810 [2024-11-19 20:07:51.366089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:17.810 [2024-11-19 20:07:51.366096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:17.810 [2024-11-19 20:07:51.366103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.810 [2024-11-19 20:07:51.366108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:17.810 [2024-11-19 20:07:51.366115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:17.811 [2024-11-19 20:07:51.366120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:17.811 [2024-11-19 20:07:51.366136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:17.811 [2024-11-19 20:07:51.366154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:17.811 
[2024-11-19 20:07:51.366171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:17.811 [2024-11-19 20:07:51.366189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:17.811 [2024-11-19 20:07:51.366207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:17.811 [2024-11-19 20:07:51.366236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.811 [2024-11-19 20:07:51.366248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:17.811 [2024-11-19 20:07:51.366253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:17.811 [2024-11-19 20:07:51.366259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.811 [2024-11-19 20:07:51.366265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:17.811 [2024-11-19 20:07:51.366271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:17.811 [2024-11-19 20:07:51.366276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:17.811 [2024-11-19 20:07:51.366287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:17.811 [2024-11-19 20:07:51.366293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366298] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:17.811 [2024-11-19 20:07:51.366306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:17.811 [2024-11-19 20:07:51.366311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.811 [2024-11-19 20:07:51.366327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:17.811 [2024-11-19 20:07:51.366335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:17.811 [2024-11-19 20:07:51.366340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:17.811 [2024-11-19 20:07:51.366347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:17.811 [2024-11-19 20:07:51.366352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:17.811 [2024-11-19 20:07:51.366359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:17.811 [2024-11-19 20:07:51.366368] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:17.811 [2024-11-19 
20:07:51.366376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:17.811 [2024-11-19 20:07:51.366392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:17.811 [2024-11-19 20:07:51.366399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:17.811 [2024-11-19 20:07:51.366407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:17.811 [2024-11-19 20:07:51.366413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:17.811 [2024-11-19 20:07:51.366421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:17.811 [2024-11-19 20:07:51.366427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:17.811 [2024-11-19 20:07:51.366433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:17.811 [2024-11-19 20:07:51.366439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:17.811 [2024-11-19 20:07:51.366448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:17.811 [2024-11-19 20:07:51.366480] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:17.811 [2024-11-19 20:07:51.366487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:17.811 [2024-11-19 20:07:51.366500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:17.811 [2024-11-19 20:07:51.366505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:17.811 [2024-11-19 20:07:51.366512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:17.811 [2024-11-19 20:07:51.366517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.811 [2024-11-19 20:07:51.366524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:17.811 [2024-11-19 20:07:51.366530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:18:17.811 [2024-11-19 20:07:51.366536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.811 [2024-11-19 20:07:51.366564] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:17.811 [2024-11-19 20:07:51.366574] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:21.109 [2024-11-19 20:07:54.774768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.774849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:21.109 [2024-11-19 20:07:54.774868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3408.186 ms 00:18:21.109 [2024-11-19 20:07:54.774880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.806682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.806754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.109 [2024-11-19 20:07:54.806771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.552 ms 00:18:21.109 [2024-11-19 20:07:54.806784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.806954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.806970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:21.109 [2024-11-19 20:07:54.806979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:21.109 [2024-11-19 20:07:54.806993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.842798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.842855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.109 [2024-11-19 20:07:54.842868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.762 ms 00:18:21.109 [2024-11-19 20:07:54.842879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.842916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.842932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.109 [2024-11-19 20:07:54.842941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.109 [2024-11-19 20:07:54.842952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.843574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.843615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.109 [2024-11-19 20:07:54.843627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:18:21.109 [2024-11-19 20:07:54.843638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 
[2024-11-19 20:07:54.843755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.843769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.109 [2024-11-19 20:07:54.843782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:21.109 [2024-11-19 20:07:54.843795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.861131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.861186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.109 [2024-11-19 20:07:54.861197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.316 ms 00:18:21.109 [2024-11-19 20:07:54.861208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.109 [2024-11-19 20:07:54.874533] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:21.109 [2024-11-19 20:07:54.878336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.109 [2024-11-19 20:07:54.878379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:21.109 [2024-11-19 20:07:54.878393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.020 ms 00:18:21.109 [2024-11-19 20:07:54.878401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.371 [2024-11-19 20:07:54.981036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.371 [2024-11-19 20:07:54.981108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:21.371 [2024-11-19 20:07:54.981129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.595 ms 00:18:21.371 [2024-11-19 20:07:54.981138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.371 [2024-11-19 20:07:54.981371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.371 [2024-11-19 20:07:54.981390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:21.371 [2024-11-19 20:07:54.981406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:21.371 [2024-11-19 20:07:54.981415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.371 [2024-11-19 20:07:55.007764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.371 [2024-11-19 20:07:55.007820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:21.371 [2024-11-19 20:07:55.007836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.270 ms 00:18:21.371 [2024-11-19 20:07:55.007845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.371 [2024-11-19 20:07:55.033315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.371 [2024-11-19 20:07:55.033376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:21.372 [2024-11-19 20:07:55.033391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.410 ms 00:18:21.372 [2024-11-19 20:07:55.033399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.372 [2024-11-19 20:07:55.034047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.372 [2024-11-19 20:07:55.034070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:21.372 
[2024-11-19 20:07:55.034083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:18:21.372 [2024-11-19 20:07:55.034093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.372 [2024-11-19 20:07:55.119083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.372 [2024-11-19 20:07:55.119134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:21.372 [2024-11-19 20:07:55.119155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.938 ms 00:18:21.372 [2024-11-19 20:07:55.119164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.372 [2024-11-19 20:07:55.147210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.372 [2024-11-19 20:07:55.147272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:21.372 [2024-11-19 20:07:55.147287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.933 ms 00:18:21.372 [2024-11-19 20:07:55.147296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.634 [2024-11-19 20:07:55.173289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.634 [2024-11-19 20:07:55.173337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:21.634 [2024-11-19 20:07:55.173351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.936 ms 00:18:21.634 [2024-11-19 20:07:55.173360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.634 [2024-11-19 20:07:55.199646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.634 [2024-11-19 20:07:55.199698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:21.634 [2024-11-19 20:07:55.199713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.232 ms 00:18:21.634 [2024-11-19 20:07:55.199722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.634 [2024-11-19 20:07:55.199779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.634 [2024-11-19 20:07:55.199789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:21.634 [2024-11-19 20:07:55.199804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:21.634 [2024-11-19 20:07:55.199813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.634 [2024-11-19 20:07:55.199909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.634 [2024-11-19 20:07:55.199922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:21.634 [2024-11-19 20:07:55.199937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:21.634 [2024-11-19 20:07:55.199945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.634 [2024-11-19 20:07:55.201218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3844.536 ms, result 0 00:18:21.634 { 00:18:21.634 "name": "ftl0", 00:18:21.634 "uuid": "a40704e4-a446-4db0-82b2-c45640f57cbf" 00:18:21.634 } 00:18:21.634 20:07:55 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:21.634 20:07:55 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:21.928 20:07:55 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:21.928 20:07:55 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:21.928 [2024-11-19 20:07:55.660533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.928 [2024-11-19 20:07:55.660605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:21.928 [2024-11-19 20:07:55.660620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:21.928 [2024-11-19 20:07:55.660638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.928 [2024-11-19 20:07:55.660663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.190 [2024-11-19 20:07:55.663793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.190 [2024-11-19 20:07:55.663841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:22.190 [2024-11-19 20:07:55.663856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:18:22.190 [2024-11-19 20:07:55.663865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.190 [2024-11-19 20:07:55.664146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.190 [2024-11-19 20:07:55.664159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:22.190 [2024-11-19 20:07:55.664175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:22.190 [2024-11-19 20:07:55.664183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.667445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.667472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:22.191 [2024-11-19 20:07:55.667485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:18:22.191 [2024-11-19 20:07:55.667493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.673795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.673839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:22.191 [2024-11-19 20:07:55.673856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.279 ms 00:18:22.191 [2024-11-19 20:07:55.673864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.700206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.700264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:22.191 [2024-11-19 20:07:55.700280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.253 ms 00:18:22.191 [2024-11-19 20:07:55.700287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.717700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.717755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:22.191 [2024-11-19 20:07:55.717772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.355 ms 00:18:22.191 [2024-11-19 20:07:55.717781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.717956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.717971] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:22.191 [2024-11-19 20:07:55.717985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:22.191 [2024-11-19 20:07:55.717993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.744287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.744336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:22.191 [2024-11-19 20:07:55.744350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.271 ms 00:18:22.191 [2024-11-19 20:07:55.744359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.770251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.770298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:22.191 [2024-11-19 20:07:55.770313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.838 ms 00:18:22.191 [2024-11-19 20:07:55.770321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.795616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.795666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:22.191 [2024-11-19 20:07:55.795680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.237 ms 00:18:22.191 [2024-11-19 20:07:55.795688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.820755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.191 [2024-11-19 20:07:55.820803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:22.191 [2024-11-19 20:07:55.820818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.936 ms 00:18:22.191 [2024-11-19 20:07:55.820825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.191 [2024-11-19 20:07:55.820876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:22.191 [2024-11-19 20:07:55.820892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820982] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.820998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 
[2024-11-19 20:07:55.821207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:22.191 [2024-11-19 20:07:55.821442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:22.192 [2024-11-19 20:07:55.821461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:22.192 [2024-11-19 20:07:55.821867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:22.192 [2024-11-19 20:07:55.821881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a40704e4-a446-4db0-82b2-c45640f57cbf 00:18:22.192 [2024-11-19 20:07:55.821889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:22.192 [2024-11-19 20:07:55.821902] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:22.192 [2024-11-19 20:07:55.821910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:22.192 [2024-11-19 20:07:55.821923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:22.192 [2024-11-19 20:07:55.821930] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:22.192 [2024-11-19 20:07:55.821942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:22.192 [2024-11-19 20:07:55.821951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:22.192 [2024-11-19 20:07:55.821959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:22.192 [2024-11-19 20:07:55.821966] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:22.192 [2024-11-19 20:07:55.821977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.192 [2024-11-19 20:07:55.821985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:22.192 [2024-11-19 20:07:55.821996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:18:22.192 [2024-11-19 20:07:55.822006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.835778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.192 [2024-11-19 20:07:55.835821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:22.192 [2024-11-19 20:07:55.835835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.723 ms 00:18:22.192 [2024-11-19 20:07:55.835843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.836296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.192 [2024-11-19 20:07:55.836312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:22.192 [2024-11-19 20:07:55.836325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:18:22.192 [2024-11-19 20:07:55.836337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.882664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.192 [2024-11-19 20:07:55.882714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.192 [2024-11-19 20:07:55.882729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.192 [2024-11-19 20:07:55.882737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.882813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.192 [2024-11-19 20:07:55.882822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.192 [2024-11-19 20:07:55.882833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.192 [2024-11-19 20:07:55.882844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.882937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.192 [2024-11-19 20:07:55.882950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.192 [2024-11-19 20:07:55.882961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.192 [2024-11-19 20:07:55.882969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.882993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.192 [2024-11-19 20:07:55.883003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.192 [2024-11-19 20:07:55.883014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.192 [2024-11-19 20:07:55.883021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.192 [2024-11-19 20:07:55.968261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.192 [2024-11-19 20:07:55.968321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.192 [2024-11-19 20:07:55.968338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:22.192 [2024-11-19 20:07:55.968347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.454 [2024-11-19 20:07:56.038485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.038498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.454 [2024-11-19 20:07:56.038639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.038647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.454 [2024-11-19 20:07:56.038725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.038733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.454 [2024-11-19 20:07:56.038860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.038870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:22.454 [2024-11-19 20:07:56.038933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.038941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.038985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.038999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.454 [2024-11-19 20:07:56.039009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.039017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.039071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.454 [2024-11-19 20:07:56.039083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.454 [2024-11-19 20:07:56.039094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.454 [2024-11-19 20:07:56.039103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.454 [2024-11-19 20:07:56.039288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.678 ms, result 0 00:18:22.454 true 00:18:22.454 20:07:56 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74524 
00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74524 ']' 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74524 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74524 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.454 killing process with pid 74524 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74524' 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74524 00:18:22.454 20:07:56 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74524 00:18:25.760 20:07:59 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:31.055 262144+0 records in 00:18:31.055 262144+0 records out 00:18:31.055 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.34173 s, 247 MB/s 00:18:31.055 20:08:03 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:32.443 20:08:06 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:32.443 [2024-11-19 20:08:06.201990] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:32.443 [2024-11-19 20:08:06.202137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74768 ] 00:18:32.704 [2024-11-19 20:08:06.364403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.704 [2024-11-19 20:08:06.480747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.278 [2024-11-19 20:08:06.771603] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.278 [2024-11-19 20:08:06.771682] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.278 [2024-11-19 20:08:06.933393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.933453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:33.278 [2024-11-19 20:08:06.933475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.278 [2024-11-19 20:08:06.933484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.278 [2024-11-19 20:08:06.933541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.933551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.278 [2024-11-19 20:08:06.933563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:33.278 [2024-11-19 20:08:06.933572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.278 [2024-11-19 20:08:06.933592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:33.278 [2024-11-19 20:08:06.934351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:33.278 [2024-11-19 20:08:06.934390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.934400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.278 [2024-11-19 20:08:06.934410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:18:33.278 [2024-11-19 20:08:06.934418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.278 [2024-11-19 20:08:06.936073] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:33.278 [2024-11-19 20:08:06.950508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.950558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:33.278 [2024-11-19 20:08:06.950571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:18:33.278 [2024-11-19 20:08:06.950580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.278 [2024-11-19 20:08:06.950661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.950671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:33.278 [2024-11-19 20:08:06.950680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:33.278 [2024-11-19 20:08:06.950688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.278 [2024-11-19 20:08:06.958700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.278 [2024-11-19 20:08:06.958742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.278 [2024-11-19 20:08:06.958752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.933 ms 00:18:33.278 [2024-11-19 20:08:06.958761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.958848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.958858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.279 [2024-11-19 20:08:06.958867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:33.279 [2024-11-19 20:08:06.958875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.958921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.958934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.279 [2024-11-19 20:08:06.958943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:33.279 [2024-11-19 20:08:06.958952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.958977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.279 [2024-11-19 20:08:06.963055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.963097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.279 [2024-11-19 20:08:06.963108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:18:33.279 [2024-11-19 20:08:06.963119] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.963154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.963163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.279 [2024-11-19 20:08:06.963172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:33.279 [2024-11-19 20:08:06.963180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.963246] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:33.279 [2024-11-19 20:08:06.963271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:33.279 [2024-11-19 20:08:06.963311] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:33.279 [2024-11-19 20:08:06.963331] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:33.279 [2024-11-19 20:08:06.963443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:33.279 [2024-11-19 20:08:06.963456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.279 [2024-11-19 20:08:06.963467] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:33.279 [2024-11-19 20:08:06.963482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.279 [2024-11-19 20:08:06.963492] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.279 [2024-11-19 20:08:06.963501] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:33.279 [2024-11-19 20:08:06.963508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.279 [2024-11-19 20:08:06.963517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:33.279 [2024-11-19 20:08:06.963526] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:33.279 [2024-11-19 20:08:06.963539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.963547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.279 [2024-11-19 20:08:06.963556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:33.279 [2024-11-19 20:08:06.963565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.963652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.279 [2024-11-19 20:08:06.963663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.279 [2024-11-19 20:08:06.963671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:33.279 [2024-11-19 20:08:06.963679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.279 [2024-11-19 20:08:06.963781] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.279 [2024-11-19 20:08:06.963807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.279 [2024-11-19 20:08:06.963816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:33.279 [2024-11-19 20:08:06.963825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.279 [2024-11-19 20:08:06.963841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:33.279 [2024-11-19 20:08:06.963857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.279 [2024-11-19 20:08:06.963865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.279 [2024-11-19 20:08:06.963883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.279 [2024-11-19 20:08:06.963892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:33.279 [2024-11-19 20:08:06.963901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.279 [2024-11-19 20:08:06.963909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.279 [2024-11-19 20:08:06.963917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:33.279 [2024-11-19 20:08:06.963930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.279 [2024-11-19 20:08:06.963948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:33.279 [2024-11-19 20:08:06.963956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.279 [2024-11-19 20:08:06.963972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:33.279 [2024-11-19 20:08:06.963980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.279 [2024-11-19 20:08:06.963987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.279 [2024-11-19 20:08:06.963994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.279 [2024-11-19 20:08:06.964008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.279 [2024-11-19 20:08:06.964017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.279 [2024-11-19 20:08:06.964032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.279 [2024-11-19 20:08:06.964040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.279 [2024-11-19 20:08:06.964055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.279 [2024-11-19 20:08:06.964062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.279 [2024-11-19 20:08:06.964076] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:33.279 [2024-11-19 20:08:06.964084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:33.279 [2024-11-19 20:08:06.964091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.279 [2024-11-19 20:08:06.964098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:33.279 [2024-11-19 20:08:06.964105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:33.279 [2024-11-19 20:08:06.964111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:33.279 [2024-11-19 20:08:06.964124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:33.279 [2024-11-19 20:08:06.964131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964137] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.279 [2024-11-19 20:08:06.964148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.279 [2024-11-19 20:08:06.964156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.279 [2024-11-19 20:08:06.964165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.279 [2024-11-19 20:08:06.964173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:33.279 [2024-11-19 20:08:06.964180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.279 [2024-11-19 20:08:06.964187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.279 [2024-11-19 20:08:06.964194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.279 [2024-11-19 20:08:06.964201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.279 [2024-11-19 20:08:06.964208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.279 [2024-11-19 20:08:06.964231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.279 [2024-11-19 20:08:06.964242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.279 [2024-11-19 20:08:06.964251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:33.279 [2024-11-19 20:08:06.964260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:33.279 [2024-11-19 20:08:06.964269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:33.279 [2024-11-19 20:08:06.964276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:33.279 [2024-11-19 20:08:06.964284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:33.279 [2024-11-19 20:08:06.964292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:33.279 [2024-11-19 20:08:06.964300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:33.279 [2024-11-19 20:08:06.964307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:33.280 [2024-11-19 20:08:06.964313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:33.280 [2024-11-19 20:08:06.964323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:33.280 [2024-11-19 20:08:06.964360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.280 [2024-11-19 20:08:06.964373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.280 [2024-11-19 20:08:06.964391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.280 [2024-11-19 20:08:06.964399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.280 [2024-11-19 20:08:06.964408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:33.280 [2024-11-19 20:08:06.964417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:06.964427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.280 [2024-11-19 20:08:06.964438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:18:33.280 [2024-11-19 20:08:06.964446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:06.996386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:06.996434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.280 [2024-11-19 20:08:06.996446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.895 ms 00:18:33.280 [2024-11-19 20:08:06.996455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:06.996552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:06.996561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.280 [2024-11-19 20:08:06.996570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.067 ms 00:18:33.280 [2024-11-19 20:08:06.996578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:07.043618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:07.043675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.280 [2024-11-19 20:08:07.043689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.982 ms 00:18:33.280 [2024-11-19 20:08:07.043698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:07.043747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:07.043757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.280 [2024-11-19 20:08:07.043767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.280 [2024-11-19 20:08:07.043778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:07.044368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:07.044403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.280 [2024-11-19 20:08:07.044414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:18:33.280 [2024-11-19 20:08:07.044423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:07.044577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:07.044590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.280 [2024-11-19 20:08:07.044600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:33.280 [2024-11-19 20:08:07.044616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.280 [2024-11-19 20:08:07.060373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.280 [2024-11-19 20:08:07.060416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.280 [2024-11-19 20:08:07.060431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.731 ms 00:18:33.280 [2024-11-19 20:08:07.060440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.542 [2024-11-19 20:08:07.074946] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:33.542 [2024-11-19 20:08:07.074998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:33.542 [2024-11-19 20:08:07.075013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.542 [2024-11-19 20:08:07.075022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:33.542 [2024-11-19 20:08:07.075032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:18:33.542 [2024-11-19 20:08:07.075040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.542 [2024-11-19 20:08:07.100758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.542 [2024-11-19 20:08:07.100823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:33.542 [2024-11-19 20:08:07.100844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.661 ms 00:18:33.542 [2024-11-19 20:08:07.100852] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.542 [2024-11-19 20:08:07.113980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.542 [2024-11-19 20:08:07.114038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:33.542 [2024-11-19 20:08:07.114050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.094 ms 00:18:33.542 [2024-11-19 20:08:07.114058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.542 [2024-11-19 20:08:07.126522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.542 [2024-11-19 20:08:07.126568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:33.542 [2024-11-19 20:08:07.126580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.417 ms 00:18:33.542 [2024-11-19 20:08:07.126587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.542 [2024-11-19 20:08:07.127256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.542 [2024-11-19 20:08:07.127289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.542 [2024-11-19 20:08:07.127301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:33.543 [2024-11-19 20:08:07.127309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.192314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.192377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:33.543 [2024-11-19 20:08:07.192393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.982 ms 00:18:33.543 [2024-11-19 20:08:07.192409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.203656] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:33.543 [2024-11-19 20:08:07.206716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.206761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.543 [2024-11-19 20:08:07.206774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.241 ms 00:18:33.543 [2024-11-19 20:08:07.206782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.206868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.206880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:33.543 [2024-11-19 20:08:07.206890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:33.543 [2024-11-19 20:08:07.206900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.206973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.206987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.543 [2024-11-19 20:08:07.206997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:33.543 [2024-11-19 20:08:07.207006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.207028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.207038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:33.543 [2024-11-19 20:08:07.207047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.543 [2024-11-19 20:08:07.207056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.207091] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:33.543 [2024-11-19 20:08:07.207103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.207114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:33.543 [2024-11-19 20:08:07.207122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:33.543 [2024-11-19 20:08:07.207131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.233151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.233200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.543 [2024-11-19 20:08:07.233214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.000 ms 00:18:33.543 [2024-11-19 20:08:07.233236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.233331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.543 [2024-11-19 20:08:07.233343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.543 [2024-11-19 20:08:07.233353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:33.543 [2024-11-19 20:08:07.233362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.543 [2024-11-19 20:08:07.234670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.761 ms, result 0 00:18:34.488  [2024-11-19T20:09:02.706Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-19 20:09:02.470345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.470405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.912 [2024-11-19 20:09:02.470421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:28.912 [2024-11-19 20:09:02.470430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.470451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.912 [2024-11-19 20:09:02.473413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.473457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.912 [2024-11-19 20:09:02.473469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:19:28.912 [2024-11-19 20:09:02.473477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.476532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.476579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.912 [2024-11-19 20:09:02.476590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:19:28.912 [2024-11-19 20:09:02.476599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.495244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.495295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.912 [2024-11-19 20:09:02.495307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.628 ms 00:19:28.912 [2024-11-19 20:09:02.495315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.501423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.501474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.912 [2024-11-19 20:09:02.501486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.067 ms 00:19:28.912 [2024-11-19 20:09:02.501494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.527959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.528007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.912 [2024-11-19 20:09:02.528019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:19:28.912 [2024-11-19 20:09:02.528026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.544213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.544276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.912 [2024-11-19 20:09:02.544289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.141 ms 00:19:28.912 [2024-11-19 20:09:02.544297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.544417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.544428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.912 [2024-11-19 20:09:02.544447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:28.912 [2024-11-19 20:09:02.544459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.570065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.570112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.912 [2024-11-19 20:09:02.570123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.591 ms 00:19:28.912 [2024-11-19 20:09:02.570131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.595420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.595466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.912 [2024-11-19 20:09:02.595489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.243 ms 00:19:28.912 [2024-11-19 20:09:02.595496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.620393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.620439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.912 [2024-11-19 20:09:02.620451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.853 ms 00:19:28.912 [2024-11-19 20:09:02.620458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.645441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.912 [2024-11-19 20:09:02.645484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.912 [2024-11-19 
20:09:02.645495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.911 ms 00:19:28.912 [2024-11-19 20:09:02.645502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.912 [2024-11-19 20:09:02.645545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.912 [2024-11-19 20:09:02.645561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.912 [2024-11-19 20:09:02.645769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645948] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.645996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646157] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 
20:09:02.646371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.913 [2024-11-19 20:09:02.646407] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.913 [2024-11-19 20:09:02.646422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a40704e4-a446-4db0-82b2-c45640f57cbf 00:19:28.913 [2024-11-19 20:09:02.646431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.913 [2024-11-19 20:09:02.646442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.913 [2024-11-19 20:09:02.646450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.913 [2024-11-19 20:09:02.646458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.913 [2024-11-19 20:09:02.646465] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.913 [2024-11-19 20:09:02.646472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.913 [2024-11-19 20:09:02.646480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.913 [2024-11-19 20:09:02.646493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.913 [2024-11-19 20:09:02.646499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.913 [2024-11-19 20:09:02.646506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.913 [2024-11-19 20:09:02.646515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.913 [2024-11-19 20:09:02.646524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:19:28.913 [2024-11-19 20:09:02.646531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.913 [2024-11-19 20:09:02.659935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.913 [2024-11-19 20:09:02.659977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.913 [2024-11-19 20:09:02.659989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.385 ms 00:19:28.913 [2024-11-19 20:09:02.659997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.913 [2024-11-19 20:09:02.660420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.913 [2024-11-19 20:09:02.660480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.914 [2024-11-19 20:09:02.660488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:19:28.914 [2024-11-19 20:09:02.660496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.914 [2024-11-19 20:09:02.696658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.914 [2024-11-19 20:09:02.696706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.914 [2024-11-19 20:09:02.696717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.914 [2024-11-19 20:09:02.696726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.914 [2024-11-19 20:09:02.696785] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:28.914 [2024-11-19 20:09:02.696794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.914 [2024-11-19 20:09:02.696803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.914 [2024-11-19 20:09:02.696812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.914 [2024-11-19 20:09:02.696880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.914 [2024-11-19 20:09:02.696891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.914 [2024-11-19 20:09:02.696900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.914 [2024-11-19 20:09:02.696908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.914 [2024-11-19 20:09:02.696924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.914 [2024-11-19 20:09:02.696931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.914 [2024-11-19 20:09:02.696940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.914 [2024-11-19 20:09:02.696947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.175 [2024-11-19 20:09:02.780171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.175 [2024-11-19 20:09:02.780250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.175 [2024-11-19 20:09:02.780263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.175 [2024-11-19 20:09:02.780272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.175 [2024-11-19 20:09:02.848452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.175 [2024-11-19 20:09:02.848510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.175 [2024-11-19 20:09:02.848522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.175 [2024-11-19 20:09:02.848530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.175 [2024-11-19 20:09:02.848606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.175 [2024-11-19 20:09:02.848623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.175 [2024-11-19 20:09:02.848633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.175 [2024-11-19 20:09:02.848641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.175 [2024-11-19 20:09:02.848678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.175 [2024-11-19 20:09:02.848688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.175 [2024-11-19 20:09:02.848697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.175 [2024-11-19 20:09:02.848706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.175 [2024-11-19 20:09:02.848804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.175 [2024-11-19 20:09:02.848818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.175 [2024-11-19 20:09:02.848827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.175 [2024-11-19 20:09:02.848836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:29.175 [2024-11-19 20:09:02.848872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.176 [2024-11-19 20:09:02.848882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.176 [2024-11-19 20:09:02.848891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.176 [2024-11-19 20:09:02.848900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.176 [2024-11-19 20:09:02.848939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.176 [2024-11-19 20:09:02.848950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.176 [2024-11-19 20:09:02.848962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.176 [2024-11-19 20:09:02.848972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.176 [2024-11-19 20:09:02.849017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.176 [2024-11-19 20:09:02.849028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.176 [2024-11-19 20:09:02.849037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.176 [2024-11-19 20:09:02.849045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.176 [2024-11-19 20:09:02.849176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.793 ms, result 0 00:19:30.561 00:19:30.561 00:19:30.561 20:09:03 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:30.561 [2024-11-19 20:09:04.038734] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:19:30.561 [2024-11-19 20:09:04.038893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75364 ] 00:19:30.561 [2024-11-19 20:09:04.199826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.561 [2024-11-19 20:09:04.315900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.822 [2024-11-19 20:09:04.604711] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.822 [2024-11-19 20:09:04.604793] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.084 [2024-11-19 20:09:04.765711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.765776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.084 [2024-11-19 20:09:04.765795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:31.084 [2024-11-19 20:09:04.765804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.765872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.765884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.084 [2024-11-19 20:09:04.765895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:31.084 [2024-11-19 20:09:04.765904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.765925] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.084 [2024-11-19 20:09:04.766629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.084 [2024-11-19 20:09:04.766664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.766672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.084 [2024-11-19 20:09:04.766682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:19:31.084 [2024-11-19 20:09:04.766689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.768346] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:31.084 [2024-11-19 20:09:04.782169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.782231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:31.084 [2024-11-19 20:09:04.782245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:19:31.084 [2024-11-19 20:09:04.782254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.782330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.782341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:31.084 [2024-11-19 20:09:04.782350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:31.084 [2024-11-19 20:09:04.782358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.790275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:31.084 [2024-11-19 20:09:04.790317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.084 [2024-11-19 20:09:04.790328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.843 ms 00:19:31.084 [2024-11-19 20:09:04.790336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.790419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.790429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.084 [2024-11-19 20:09:04.790438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:31.084 [2024-11-19 20:09:04.790446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.790488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.790499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.084 [2024-11-19 20:09:04.790507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:31.084 [2024-11-19 20:09:04.790515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.790538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.084 [2024-11-19 20:09:04.794469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.794506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.084 [2024-11-19 20:09:04.794517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.936 ms 00:19:31.084 [2024-11-19 20:09:04.794527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.794562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.794571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.084 [2024-11-19 20:09:04.794579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:31.084 [2024-11-19 20:09:04.794587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.794636] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:31.084 [2024-11-19 20:09:04.794659] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:31.084 [2024-11-19 20:09:04.794698] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:31.084 [2024-11-19 20:09:04.794718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:31.084 [2024-11-19 20:09:04.794822] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.084 [2024-11-19 20:09:04.794833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.084 [2024-11-19 20:09:04.794844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.084 [2024-11-19 20:09:04.794854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.084 [2024-11-19 20:09:04.794864] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.084 [2024-11-19 20:09:04.794872] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:31.084 [2024-11-19 20:09:04.794880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.084 [2024-11-19 20:09:04.794888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.084 [2024-11-19 20:09:04.794896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.084 [2024-11-19 20:09:04.794907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.794914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.084 [2024-11-19 20:09:04.794922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:31.084 [2024-11-19 20:09:04.794929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.795012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.084 [2024-11-19 20:09:04.795021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.084 [2024-11-19 20:09:04.795029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:31.084 [2024-11-19 20:09:04.795036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.084 [2024-11-19 20:09:04.795140] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.084 [2024-11-19 20:09:04.795152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.084 [2024-11-19 20:09:04.795160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.084 [2024-11-19 20:09:04.795168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.084 [2024-11-19 20:09:04.795176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.084 [2024-11-19 20:09:04.795183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.084 [2024-11-19 20:09:04.795191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:31.084 [2024-11-19 20:09:04.795199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.084 [2024-11-19 20:09:04.795207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:31.084 [2024-11-19 20:09:04.795214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.084 [2024-11-19 20:09:04.795238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.084 [2024-11-19 20:09:04.795248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:31.084 [2024-11-19 20:09:04.795255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.084 [2024-11-19 20:09:04.795262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.084 [2024-11-19 20:09:04.795269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:31.084 [2024-11-19 20:09:04.795283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.084 [2024-11-19 20:09:04.795290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.084 [2024-11-19 20:09:04.795297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795304] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.085 [2024-11-19 20:09:04.795319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.085 [2024-11-19 20:09:04.795339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.085 [2024-11-19 20:09:04.795359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.085 [2024-11-19 20:09:04.795379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.085 [2024-11-19 20:09:04.795399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.085 [2024-11-19 20:09:04.795413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.085 [2024-11-19 20:09:04.795420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:31.085 [2024-11-19 20:09:04.795426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.085 [2024-11-19 20:09:04.795433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.085 [2024-11-19 20:09:04.795440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:31.085 [2024-11-19 20:09:04.795447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.085 [2024-11-19 20:09:04.795460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:31.085 [2024-11-19 20:09:04.795466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795476] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.085 [2024-11-19 20:09:04.795484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.085 [2024-11-19 20:09:04.795492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.085 [2024-11-19 20:09:04.795509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.085 [2024-11-19 20:09:04.795517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.085 [2024-11-19 20:09:04.795523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.085 
[2024-11-19 20:09:04.795530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.085 [2024-11-19 20:09:04.795537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.085 [2024-11-19 20:09:04.795543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.085 [2024-11-19 20:09:04.795552] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.085 [2024-11-19 20:09:04.795562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:31.085 [2024-11-19 20:09:04.795578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:31.085 [2024-11-19 20:09:04.795585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:31.085 [2024-11-19 20:09:04.795593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:31.085 [2024-11-19 20:09:04.795599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:31.085 [2024-11-19 20:09:04.795607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:31.085 [2024-11-19 20:09:04.795614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:31.085 [2024-11-19 20:09:04.795621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:31.085 [2024-11-19 20:09:04.795628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:31.085 [2024-11-19 20:09:04.795636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:31.085 [2024-11-19 20:09:04.795671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.085 [2024-11-19 20:09:04.795682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.085 [2024-11-19 20:09:04.795699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.085 [2024-11-19 20:09:04.795706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.085 [2024-11-19 20:09:04.795714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.085 [2024-11-19 20:09:04.795722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.085 [2024-11-19 20:09:04.795730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.085 [2024-11-19 20:09:04.795738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:19:31.085 [2024-11-19 20:09:04.795745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.085 [2024-11-19 20:09:04.827307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.085 [2024-11-19 20:09:04.827361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.085 [2024-11-19 20:09:04.827374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.519 ms 00:19:31.085 [2024-11-19 20:09:04.827382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.085 [2024-11-19 20:09:04.827477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.085 [2024-11-19 20:09:04.827486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.085 [2024-11-19 20:09:04.827495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:31.085 [2024-11-19 20:09:04.827503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.347 [2024-11-19 20:09:04.875273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.347 [2024-11-19 20:09:04.875332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.347 [2024-11-19 20:09:04.875345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.711 ms 00:19:31.347 [2024-11-19 20:09:04.875354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.347 [2024-11-19 20:09:04.875401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.347 [2024-11-19 20:09:04.875411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.347 [2024-11-19 20:09:04.875421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.347 [2024-11-19 20:09:04.875433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.347 [2024-11-19 20:09:04.876030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.347 [2024-11-19 20:09:04.876073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.347 [2024-11-19 20:09:04.876084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:19:31.347 [2024-11-19 20:09:04.876092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.347 [2024-11-19 20:09:04.876267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.347 [2024-11-19 20:09:04.876279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.347 [2024-11-19 20:09:04.876288] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:31.348 [2024-11-19 20:09:04.876301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.891756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.891804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.348 [2024-11-19 20:09:04.891818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.435 ms 00:19:31.348 [2024-11-19 20:09:04.891826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.906198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:31.348 [2024-11-19 20:09:04.906256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:31.348 [2024-11-19 20:09:04.906269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.906278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:31.348 [2024-11-19 20:09:04.906287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.335 ms 00:19:31.348 [2024-11-19 20:09:04.906295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.932808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.932880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:31.348 [2024-11-19 20:09:04.932892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.463 ms 00:19:31.348 [2024-11-19 20:09:04.932900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.945714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.945765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:31.348 [2024-11-19 20:09:04.945776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:19:31.348 [2024-11-19 20:09:04.945784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.958187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.958247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:31.348 [2024-11-19 20:09:04.958259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:19:31.348 [2024-11-19 20:09:04.958267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:04.958902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:04.958933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.348 [2024-11-19 20:09:04.958943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:31.348 [2024-11-19 20:09:04.958953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.022964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.023031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:31.348 [2024-11-19 20:09:05.023053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.991 ms 00:19:31.348 [2024-11-19 20:09:05.023063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.034148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:31.348 [2024-11-19 20:09:05.037115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.037157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.348 [2024-11-19 20:09:05.037169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:19:31.348 [2024-11-19 20:09:05.037177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.037277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.037290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:31.348 [2024-11-19 20:09:05.037300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:31.348 [2024-11-19 20:09:05.037312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.037385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.037396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.348 [2024-11-19 20:09:05.037406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:31.348 [2024-11-19 20:09:05.037413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.037434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.037443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.348 [2024-11-19 20:09:05.037452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.348 [2024-11-19 20:09:05.037460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.037497] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:31.348 [2024-11-19 20:09:05.037510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.037519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:31.348 [2024-11-19 20:09:05.037527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:31.348 [2024-11-19 20:09:05.037536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.062967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.063014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.348 [2024-11-19 20:09:05.063028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.413 ms 00:19:31.348 [2024-11-19 20:09:05.063041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.348 [2024-11-19 20:09:05.063127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.348 [2024-11-19 20:09:05.063137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.348 [2024-11-19 20:09:05.063147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:31.348 [2024-11-19 20:09:05.063156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:31.348 [2024-11-19 20:09:05.064416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.188 ms, result 0 00:19:32.733  [2024-11-19T20:09:07.474Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-19T20:09:08.419Z] Copying: 27/1024 [MB] (11 MBps) [2024-11-19T20:09:09.364Z] Copying: 38/1024 [MB] (10 MBps) [2024-11-19T20:09:10.304Z] Copying: 48/1024 [MB] (10 MBps) [2024-11-19T20:09:11.298Z] Copying: 64/1024 [MB] (16 MBps) [2024-11-19T20:09:12.687Z] Copying: 81/1024 [MB] (16 MBps) [2024-11-19T20:09:13.260Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-19T20:09:14.646Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-19T20:09:15.587Z] Copying: 122/1024 [MB] (17 MBps) [2024-11-19T20:09:16.531Z] Copying: 137/1024 [MB] (14 MBps) [2024-11-19T20:09:17.474Z] Copying: 160/1024 [MB] (23 MBps) [2024-11-19T20:09:18.418Z] Copying: 178/1024 [MB] (17 MBps) [2024-11-19T20:09:19.370Z] Copying: 200/1024 [MB] (21 MBps) [2024-11-19T20:09:20.313Z] Copying: 221/1024 [MB] (21 MBps) [2024-11-19T20:09:21.259Z] Copying: 237/1024 [MB] (15 MBps) [2024-11-19T20:09:22.648Z] Copying: 254/1024 [MB] (17 MBps) [2024-11-19T20:09:23.593Z] Copying: 272/1024 [MB] (18 MBps) [2024-11-19T20:09:24.539Z] Copying: 290/1024 [MB] (17 MBps) [2024-11-19T20:09:25.486Z] Copying: 312/1024 [MB] (21 MBps) [2024-11-19T20:09:26.431Z] Copying: 329/1024 [MB] (16 MBps) [2024-11-19T20:09:27.374Z] Copying: 340/1024 [MB] (10 MBps) [2024-11-19T20:09:28.319Z] Copying: 350/1024 [MB] (10 MBps) [2024-11-19T20:09:29.274Z] Copying: 368/1024 [MB] (17 MBps) [2024-11-19T20:09:30.658Z] Copying: 391/1024 [MB] (22 MBps) [2024-11-19T20:09:31.600Z] Copying: 410/1024 [MB] (19 MBps) [2024-11-19T20:09:32.542Z] Copying: 428/1024 [MB] (17 MBps) [2024-11-19T20:09:33.483Z] Copying: 448/1024 [MB] (20 MBps) [2024-11-19T20:09:34.425Z] Copying: 463/1024 [MB] (15 MBps) [2024-11-19T20:09:35.369Z] Copying: 477/1024 [MB] (13 MBps) [2024-11-19T20:09:36.312Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-19T20:09:37.254Z] Copying: 499/1024 [MB] (11 MBps) [2024-11-19T20:09:38.642Z] Copying: 522/1024 [MB] (23 MBps) [2024-11-19T20:09:39.586Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-19T20:09:40.590Z] Copying: 545/1024 [MB] (11 MBps) [2024-11-19T20:09:41.534Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-19T20:09:42.479Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-19T20:09:43.424Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-19T20:09:44.368Z] Copying: 594/1024 [MB] (15 MBps) [2024-11-19T20:09:45.312Z] Copying: 609/1024 [MB] (15 MBps) [2024-11-19T20:09:46.255Z] Copying: 623/1024 [MB] (13 MBps) [2024-11-19T20:09:47.641Z] Copying: 636/1024 [MB] (12 MBps) [2024-11-19T20:09:48.585Z] Copying: 651/1024 [MB] (15 MBps) [2024-11-19T20:09:49.528Z] Copying: 665/1024 [MB] (14 MBps) [2024-11-19T20:09:50.472Z] Copying: 684/1024 [MB] (18 MBps) [2024-11-19T20:09:51.415Z] Copying: 703/1024 [MB] (18 MBps) [2024-11-19T20:09:52.357Z] Copying: 718/1024 [MB] (15 MBps) [2024-11-19T20:09:53.297Z] Copying: 732/1024 [MB] (13 MBps) [2024-11-19T20:09:54.684Z] Copying: 742/1024 [MB] (10 MBps) [2024-11-19T20:09:55.258Z] Copying: 757/1024 [MB] (15 MBps) [2024-11-19T20:09:56.645Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-19T20:09:57.584Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-19T20:09:58.528Z] Copying: 798/1024 [MB] (20 MBps) [2024-11-19T20:09:59.471Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-19T20:10:00.412Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-19T20:10:01.355Z] Copying: 840/1024 [MB] (20 MBps) [2024-11-19T20:10:02.298Z] Copying: 855/1024 [MB] (14 MBps) 
[2024-11-19T20:10:03.680Z] Copying: 881/1024 [MB] (26 MBps) [2024-11-19T20:10:04.252Z] Copying: 918/1024 [MB] (36 MBps) [2024-11-19T20:10:05.636Z] Copying: 937/1024 [MB] (18 MBps) [2024-11-19T20:10:06.577Z] Copying: 956/1024 [MB] (19 MBps) [2024-11-19T20:10:07.520Z] Copying: 977/1024 [MB] (20 MBps) [2024-11-19T20:10:08.464Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-19T20:10:09.482Z] Copying: 1002/1024 [MB] (13 MBps) [2024-11-19T20:10:09.482Z] Copying: 1021/1024 [MB] (19 MBps) [2024-11-19T20:10:09.743Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 20:10:09.694008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.949 [2024-11-19 20:10:09.694093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.949 [2024-11-19 20:10:09.694109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:35.949 [2024-11-19 20:10:09.694119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.949 [2024-11-19 20:10:09.694145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.949 [2024-11-19 20:10:09.697830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.949 [2024-11-19 20:10:09.697883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.949 [2024-11-19 20:10:09.697904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.667 ms 00:20:35.949 [2024-11-19 20:10:09.697914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.949 [2024-11-19 20:10:09.698177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.949 [2024-11-19 20:10:09.698190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.949 [2024-11-19 20:10:09.698200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:35.949 [2024-11-19 20:10:09.698208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.949 [2024-11-19 20:10:09.702543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.949 [2024-11-19 20:10:09.702576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.949 [2024-11-19 20:10:09.702586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:20:35.949 [2024-11-19 20:10:09.702595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.949 [2024-11-19 20:10:09.708742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.949 [2024-11-19 20:10:09.708785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:35.950 [2024-11-19 20:10:09.708797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:20:35.950 [2024-11-19 20:10:09.708806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-11-19 20:10:09.737216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-11-19 20:10:09.737276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.950 [2024-11-19 20:10:09.737290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.336 ms 00:20:35.950 [2024-11-19 20:10:09.737298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.754334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.754385] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:36.212 [2024-11-19 20:10:09.754400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.981 ms 00:20:36.212 [2024-11-19 20:10:09.754410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.754575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.754597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:36.212 [2024-11-19 20:10:09.754608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:36.212 [2024-11-19 20:10:09.754616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.781990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.782041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:36.212 [2024-11-19 20:10:09.782054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.357 ms 00:20:36.212 [2024-11-19 20:10:09.782063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.807746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.807808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:36.212 [2024-11-19 20:10:09.807820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.632 ms 00:20:36.212 [2024-11-19 20:10:09.807828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.833461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.833511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:36.212 [2024-11-19 20:10:09.833523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.584 ms 00:20:36.212 [2024-11-19 20:10:09.833531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.858861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.212 [2024-11-19 20:10:09.858908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:36.212 [2024-11-19 20:10:09.858921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.250 ms 00:20:36.212 [2024-11-19 20:10:09.858928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.212 [2024-11-19 20:10:09.858976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:36.212 [2024-11-19 20:10:09.858992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 
20:10:09.859054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:36.212 [2024-11-19 20:10:09.859158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:20:36.213 [2024-11-19 20:10:09.859262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:36.213 [2024-11-19 20:10:09.859825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:36.213 [2024-11-19 20:10:09.859837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a40704e4-a446-4db0-82b2-c45640f57cbf 00:20:36.213 [2024-11-19 20:10:09.859846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:36.213 [2024-11-19 20:10:09.859854] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:36.213 [2024-11-19 20:10:09.859862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:36.213 [2024-11-19 20:10:09.859870] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:36.213 [2024-11-19 20:10:09.859878] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:36.213 [2024-11-19 20:10:09.859886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:36.213 [2024-11-19 20:10:09.859901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:36.213 [2024-11-19 20:10:09.859909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:36.213 [2024-11-19 20:10:09.859916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:36.213 [2024-11-19 20:10:09.859924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.214 [2024-11-19 20:10:09.859932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:36.214 [2024-11-19 20:10:09.859941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:20:36.214 [2024-11-19 20:10:09.859948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.873714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.214 [2024-11-19 20:10:09.873759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:36.214 [2024-11-19 20:10:09.873772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.743 ms 00:20:36.214 [2024-11-19 20:10:09.873780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.874206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.214 [2024-11-19 20:10:09.874241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:36.214 [2024-11-19 20:10:09.874252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:20:36.214 [2024-11-19 20:10:09.874268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.911036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.214 [2024-11-19 20:10:09.911088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.214 [2024-11-19 20:10:09.911101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.214 [2024-11-19 20:10:09.911110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.911179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.214 [2024-11-19 20:10:09.911189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.214 [2024-11-19 20:10:09.911199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.214 [2024-11-19 20:10:09.911213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.911319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.214 [2024-11-19 20:10:09.911332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.214 [2024-11-19 20:10:09.911342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.214 [2024-11-19 20:10:09.911351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.911368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.214 [2024-11-19 20:10:09.911378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.214 [2024-11-19 20:10:09.911387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:36.214 [2024-11-19 20:10:09.911396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.214 [2024-11-19 20:10:09.995566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.214 [2024-11-19 20:10:09.995627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.214 [2024-11-19 20:10:09.995640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.214 [2024-11-19 20:10:09.995649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.475 [2024-11-19 20:10:10.065454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.475 [2024-11-19 20:10:10.065549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.475 [2024-11-19 20:10:10.065642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.475 [2024-11-19 20:10:10.065778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.475 [2024-11-19 20:10:10.065837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.065904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.475 [2024-11-19 20:10:10.065913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.065921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.065988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.475 [2024-11-19 20:10:10.066064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.475 [2024-11-19 20:10:10.066073] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.475 [2024-11-19 20:10:10.066082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.475 [2024-11-19 20:10:10.066239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.177 ms, result 0 00:20:37.048 00:20:37.048 00:20:37.048 20:10:10 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:39.597 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:39.597 20:10:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:39.597 [2024-11-19 20:10:13.082616] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:20:39.597 [2024-11-19 20:10:13.082737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76076 ] 00:20:39.597 [2024-11-19 20:10:13.241082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.597 [2024-11-19 20:10:13.342760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.858 [2024-11-19 20:10:13.631717] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.858 [2024-11-19 20:10:13.631804] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.121 [2024-11-19 20:10:13.794193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.121 [2024-11-19 20:10:13.794272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.121 [2024-11-19 20:10:13.794295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.121 [2024-11-19 20:10:13.794303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.121 [2024-11-19 20:10:13.794362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.121 [2024-11-19 20:10:13.794375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.121 [2024-11-19 20:10:13.794389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:40.121 [2024-11-19 20:10:13.794398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.121 [2024-11-19 20:10:13.794420] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.121 [2024-11-19 20:10:13.795167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.121 [2024-11-19 20:10:13.795204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.121 [2024-11-19 20:10:13.795214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.121 [2024-11-19 20:10:13.795239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:20:40.121 [2024-11-19 20:10:13.795248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.121 [2024-11-19 20:10:13.796976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.121 [2024-11-19 20:10:13.811569] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.121 [2024-11-19 20:10:13.811627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.121 [2024-11-19 20:10:13.811642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.594 ms 00:20:40.121 [2024-11-19 20:10:13.811651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.121 [2024-11-19 20:10:13.811739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.121 [2024-11-19 20:10:13.811750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.122 [2024-11-19 20:10:13.811759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:40.122 [2024-11-19 20:10:13.811768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.820201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.820267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.122 [2024-11-19 20:10:13.820278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.352 ms 00:20:40.122 [2024-11-19 20:10:13.820287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.820376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.820385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.122 [2024-11-19 20:10:13.820394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:40.122 [2024-11-19 20:10:13.820402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.820448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.820460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.122 [2024-11-19 20:10:13.820468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.122 [2024-11-19 20:10:13.820476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.820501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.122 [2024-11-19 20:10:13.824545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.824588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.122 [2024-11-19 20:10:13.824599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:20:40.122 [2024-11-19 20:10:13.824610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.824646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.824655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.122 [2024-11-19 20:10:13.824663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:40.122 [2024-11-19 20:10:13.824671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.824727] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.122 [2024-11-19 20:10:13.824750] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.122 [2024-11-19 
20:10:13.824787] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.122 [2024-11-19 20:10:13.824808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.122 [2024-11-19 20:10:13.824915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.122 [2024-11-19 20:10:13.824926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.122 [2024-11-19 20:10:13.824939] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.122 [2024-11-19 20:10:13.824950] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.122 [2024-11-19 20:10:13.824959] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.122 [2024-11-19 20:10:13.824967] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.122 [2024-11-19 20:10:13.824976] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.122 [2024-11-19 20:10:13.824984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.122 [2024-11-19 20:10:13.824993] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.122 [2024-11-19 20:10:13.825006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.825014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.122 [2024-11-19 20:10:13.825023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:40.122 [2024-11-19 20:10:13.825030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.825114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.122 [2024-11-19 20:10:13.825124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.122 [2024-11-19 20:10:13.825132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:40.122 [2024-11-19 20:10:13.825141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.122 [2024-11-19 20:10:13.825264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.122 [2024-11-19 20:10:13.825280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.122 [2024-11-19 20:10:13.825290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.122 [2024-11-19 20:10:13.825314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.122 [2024-11-19 20:10:13.825339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.122 [2024-11-19 
20:10:13.825354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.122 [2024-11-19 20:10:13.825361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.122 [2024-11-19 20:10:13.825371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.122 [2024-11-19 20:10:13.825380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.122 [2024-11-19 20:10:13.825387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.122 [2024-11-19 20:10:13.825401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.122 [2024-11-19 20:10:13.825416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.122 [2024-11-19 20:10:13.825436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.122 [2024-11-19 20:10:13.825460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.122 [2024-11-19 20:10:13.825479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.122 [2024-11-19 20:10:13.825501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.122 [2024-11-19 20:10:13.825520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.122 [2024-11-19 20:10:13.825533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.122 [2024-11-19 20:10:13.825539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.122 [2024-11-19 20:10:13.825547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.122 [2024-11-19 20:10:13.825554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.122 [2024-11-19 20:10:13.825561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.122 [2024-11-19 20:10:13.825567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.122 [2024-11-19 20:10:13.825580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:20:40.122 [2024-11-19 20:10:13.825586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825593] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.122 [2024-11-19 20:10:13.825607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.122 [2024-11-19 20:10:13.825615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.122 [2024-11-19 20:10:13.825631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.122 [2024-11-19 20:10:13.825638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.122 [2024-11-19 20:10:13.825644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.122 [2024-11-19 20:10:13.825653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.122 [2024-11-19 20:10:13.825661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.122 [2024-11-19 20:10:13.825667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.122 [2024-11-19 20:10:13.825676] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.122 [2024-11-19 20:10:13.825685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.122 [2024-11-19 20:10:13.825694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.122 [2024-11-19 20:10:13.825701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.122 [2024-11-19 20:10:13.825710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.122 [2024-11-19 20:10:13.825718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.123 [2024-11-19 20:10:13.825726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.123 [2024-11-19 20:10:13.825733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.123 [2024-11-19 20:10:13.825740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.123 [2024-11-19 20:10:13.825747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:40.123 [2024-11-19 20:10:13.825754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.123 [2024-11-19 20:10:13.825762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825778] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.123 [2024-11-19 20:10:13.825800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.123 [2024-11-19 20:10:13.825813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.123 [2024-11-19 20:10:13.825830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.123 [2024-11-19 20:10:13.825837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.123 [2024-11-19 20:10:13.825844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.123 [2024-11-19 20:10:13.825851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.825862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.123 [2024-11-19 20:10:13.825871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:20:40.123 [2024-11-19 20:10:13.825878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.858297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.858447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.123 [2024-11-19 20:10:13.858460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.374 ms 00:20:40.123 [2024-11-19 20:10:13.858469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.858568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.858579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.123 [2024-11-19 20:10:13.858588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:40.123 [2024-11-19 20:10:13.858596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.901188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.901264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.123 [2024-11-19 20:10:13.901278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.530 ms 00:20:40.123 [2024-11-19 20:10:13.901287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.901339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.901350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.123 
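
A quick consistency check on the layout dump above, as a minimal Python sketch. It assumes SPDK FTL's usual 4 KiB logical block size; the block size itself is never printed in this log, so treat that as an assumption.

  # L2P region size = entries * address size (values from the dump above)
  l2p_entries = 20971520          # "L2P entries: 20971520"
  l2p_addr_sz = 4                 # "L2P address size: 4" (bytes)
  print(l2p_entries * l2p_addr_sz / 2**20)   # 80.0, matches "Region l2p ... blocks: 80.00 MiB"

  # Base-device data region from the SB metadata layout
  FTL_BLOCK = 4096                # assumed 4 KiB FTL block
  print(0x1900000 * FTL_BLOCK / 2**20)       # 102400.0, matches "Region data_btm ... blocks: 102400.00 MiB"
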
[2024-11-19 20:10:13.901360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.123 [2024-11-19 20:10:13.901372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.902018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.902057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.123 [2024-11-19 20:10:13.902070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:20:40.123 [2024-11-19 20:10:13.902078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.123 [2024-11-19 20:10:13.902260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.123 [2024-11-19 20:10:13.902275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.123 [2024-11-19 20:10:13.902284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:40.123 [2024-11-19 20:10:13.902298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.384 [2024-11-19 20:10:13.918265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.918315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.385 [2024-11-19 20:10:13.918330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.945 ms 00:20:40.385 [2024-11-19 20:10:13.918338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:13.932902] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.385 [2024-11-19 20:10:13.932960] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.385 [2024-11-19 20:10:13.932974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.932983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.385 [2024-11-19 20:10:13.932993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.523 ms 00:20:40.385 [2024-11-19 20:10:13.933001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:13.959383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.959445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.385 [2024-11-19 20:10:13.959457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.318 ms 00:20:40.385 [2024-11-19 20:10:13.959466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:13.972758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.972809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.385 [2024-11-19 20:10:13.972821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.233 ms 00:20:40.385 [2024-11-19 20:10:13.972829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:13.986118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.986172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.385 [2024-11-19 20:10:13.986185] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 13.239 ms 00:20:40.385 [2024-11-19 20:10:13.986192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:13.986887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:13.986929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.385 [2024-11-19 20:10:13.986940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:40.385 [2024-11-19 20:10:13.986951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.053906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.053990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.385 [2024-11-19 20:10:14.054016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.934 ms 00:20:40.385 [2024-11-19 20:10:14.054025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.065506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.385 [2024-11-19 20:10:14.068765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.068814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.385 [2024-11-19 20:10:14.068827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.677 ms 00:20:40.385 [2024-11-19 20:10:14.068836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.068935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.068947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.385 [2024-11-19 20:10:14.068958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:40.385 [2024-11-19 20:10:14.068971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.069047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.069059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.385 [2024-11-19 20:10:14.069069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:40.385 [2024-11-19 20:10:14.069077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.069099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.069109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.385 [2024-11-19 20:10:14.069118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.385 [2024-11-19 20:10:14.069127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.069164] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.385 [2024-11-19 20:10:14.069177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.069186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.385 [2024-11-19 20:10:14.069194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:40.385 [2024-11-19 20:10:14.069204] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.095701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.095759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.385 [2024-11-19 20:10:14.095774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.458 ms 00:20:40.385 [2024-11-19 20:10:14.095789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.095885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.385 [2024-11-19 20:10:14.095896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.385 [2024-11-19 20:10:14.095906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:40.385 [2024-11-19 20:10:14.095914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.385 [2024-11-19 20:10:14.097212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.526 ms, result 0 00:20:41.331  [2024-11-19T20:10:16.505Z] Copying: 8596/1048576 [kB] (8596 kBps) [2024-11-19T20:10:17.448Z] Copying: 41/1024 [MB] (33 MBps) [2024-11-19T20:10:18.393Z] Copying: 66/1024 [MB] (24 MBps) [2024-11-19T20:10:19.344Z] Copying: 89/1024 [MB] (22 MBps) [2024-11-19T20:10:20.287Z] Copying: 108/1024 [MB] (18 MBps) [2024-11-19T20:10:21.224Z] Copying: 128/1024 [MB] (20 MBps) [2024-11-19T20:10:22.170Z] Copying: 164/1024 [MB] (35 MBps) [2024-11-19T20:10:23.114Z] Copying: 187/1024 [MB] (23 MBps) [2024-11-19T20:10:24.503Z] Copying: 210/1024 [MB] (22 MBps) [2024-11-19T20:10:25.446Z] Copying: 229/1024 [MB] (19 MBps) [2024-11-19T20:10:26.391Z] Copying: 251/1024 [MB] (21 MBps) [2024-11-19T20:10:27.335Z] Copying: 268/1024 [MB] (16 MBps) [2024-11-19T20:10:28.280Z] Copying: 289/1024 [MB] (20 MBps) [2024-11-19T20:10:29.221Z] Copying: 310/1024 [MB] (21 MBps) [2024-11-19T20:10:30.160Z] Copying: 331/1024 [MB] (20 MBps) [2024-11-19T20:10:31.548Z] Copying: 349/1024 [MB] (17 MBps) [2024-11-19T20:10:32.122Z] Copying: 359/1024 [MB] (10 MBps) [2024-11-19T20:10:33.512Z] Copying: 369/1024 [MB] (10 MBps) [2024-11-19T20:10:34.457Z] Copying: 379/1024 [MB] (10 MBps) [2024-11-19T20:10:35.403Z] Copying: 389/1024 [MB] (10 MBps) [2024-11-19T20:10:36.349Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-19T20:10:37.295Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-19T20:10:38.302Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-19T20:10:39.249Z] Copying: 434/1024 [MB] (13 MBps) [2024-11-19T20:10:40.194Z] Copying: 459/1024 [MB] (25 MBps) [2024-11-19T20:10:41.139Z] Copying: 476/1024 [MB] (17 MBps) [2024-11-19T20:10:42.530Z] Copying: 490/1024 [MB] (13 MBps) [2024-11-19T20:10:43.476Z] Copying: 510/1024 [MB] (19 MBps) [2024-11-19T20:10:44.421Z] Copying: 524/1024 [MB] (14 MBps) [2024-11-19T20:10:45.368Z] Copying: 541/1024 [MB] (16 MBps) [2024-11-19T20:10:46.313Z] Copying: 561/1024 [MB] (19 MBps) [2024-11-19T20:10:47.258Z] Copying: 581/1024 [MB] (19 MBps) [2024-11-19T20:10:48.200Z] Copying: 597/1024 [MB] (16 MBps) [2024-11-19T20:10:49.152Z] Copying: 614/1024 [MB] (17 MBps) [2024-11-19T20:10:50.541Z] Copying: 631/1024 [MB] (16 MBps) [2024-11-19T20:10:51.115Z] Copying: 648/1024 [MB] (17 MBps) [2024-11-19T20:10:52.501Z] Copying: 666/1024 [MB] (18 MBps) [2024-11-19T20:10:53.445Z] Copying: 682/1024 [MB] (16 MBps) [2024-11-19T20:10:54.388Z] Copying: 705/1024 [MB] (23 MBps) [2024-11-19T20:10:55.333Z] Copying: 720/1024 [MB] (14 MBps) 
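
The tick stream above and below is the spdk_dd write from restore.sh@79 filling ftl0 with the 1024 MB testfile. Rough numbers behind it, under the same 4 KiB block-size assumption as above, and assuming --seek counts output-device blocks as with classic dd:

  BLOCK = 4096
  print(131072 * BLOCK // 2**20)   # --seek=131072 blocks: 512 MiB offset into ftl0
  print(round(1024 / 60))          # 1024 MB over roughly 60 s of ticks: ~17 MBps,
                                   # consistent with the average the stream reports at the end
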
[2024-11-19T20:10:56.278Z] Copying: 747640/1048576 [kB] (10064 kBps) [2024-11-19T20:10:57.223Z] Copying: 740/1024 [MB] (10 MBps) [2024-11-19T20:10:58.159Z] Copying: 768240/1048576 [kB] (10144 kBps) [2024-11-19T20:10:59.549Z] Copying: 784/1024 [MB] (34 MBps) [2024-11-19T20:11:00.123Z] Copying: 796/1024 [MB] (12 MBps) [2024-11-19T20:11:01.509Z] Copying: 807/1024 [MB] (10 MBps) [2024-11-19T20:11:02.451Z] Copying: 821/1024 [MB] (14 MBps) [2024-11-19T20:11:03.392Z] Copying: 837/1024 [MB] (15 MBps) [2024-11-19T20:11:04.337Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-19T20:11:05.283Z] Copying: 888/1024 [MB] (22 MBps) [2024-11-19T20:11:06.227Z] Copying: 902/1024 [MB] (13 MBps) [2024-11-19T20:11:07.211Z] Copying: 917/1024 [MB] (15 MBps) [2024-11-19T20:11:08.175Z] Copying: 936/1024 [MB] (19 MBps) [2024-11-19T20:11:09.121Z] Copying: 949/1024 [MB] (12 MBps) [2024-11-19T20:11:10.509Z] Copying: 964/1024 [MB] (15 MBps) [2024-11-19T20:11:11.454Z] Copying: 977/1024 [MB] (13 MBps) [2024-11-19T20:11:12.400Z] Copying: 991/1024 [MB] (13 MBps) [2024-11-19T20:11:13.346Z] Copying: 1010/1024 [MB] (18 MBps) [2024-11-19T20:11:13.919Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-19T20:11:13.919Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-19 20:11:13.690924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.125 [2024-11-19 20:11:13.691008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:40.125 [2024-11-19 20:11:13.691028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:40.125 [2024-11-19 20:11:13.691049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.693452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.126 [2024-11-19 20:11:13.700520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.700572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:40.126 [2024-11-19 20:11:13.700585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.007 ms 00:21:40.126 [2024-11-19 20:11:13.700594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.712123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.712178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:40.126 [2024-11-19 20:11:13.712191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.277 ms 00:21:40.126 [2024-11-19 20:11:13.712201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.735885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.735936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:40.126 [2024-11-19 20:11:13.735948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.645 ms 00:21:40.126 [2024-11-19 20:11:13.735957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.742095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.742137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:40.126 [2024-11-19 20:11:13.742149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:21:40.126 [2024-11-19 20:11:13.742159] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.768682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.768734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:40.126 [2024-11-19 20:11:13.768747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.376 ms 00:21:40.126 [2024-11-19 20:11:13.768756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.126 [2024-11-19 20:11:13.784800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.126 [2024-11-19 20:11:13.784871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:40.126 [2024-11-19 20:11:13.784885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.996 ms 00:21:40.126 [2024-11-19 20:11:13.784893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.389 [2024-11-19 20:11:14.079635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.389 [2024-11-19 20:11:14.079693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:40.389 [2024-11-19 20:11:14.079706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 294.687 ms 00:21:40.389 [2024-11-19 20:11:14.079715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.389 [2024-11-19 20:11:14.105778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.389 [2024-11-19 20:11:14.105825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:40.389 [2024-11-19 20:11:14.105837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.046 ms 00:21:40.389 [2024-11-19 20:11:14.105845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.389 [2024-11-19 20:11:14.131147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.389 [2024-11-19 20:11:14.131205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:40.389 [2024-11-19 20:11:14.131217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.255 ms 00:21:40.389 [2024-11-19 20:11:14.131240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.389 [2024-11-19 20:11:14.155581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.389 [2024-11-19 20:11:14.155624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:40.389 [2024-11-19 20:11:14.155635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.295 ms 00:21:40.389 [2024-11-19 20:11:14.155642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.652 [2024-11-19 20:11:14.180731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.652 [2024-11-19 20:11:14.180777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:40.652 [2024-11-19 20:11:14.180789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.016 ms 00:21:40.652 [2024-11-19 20:11:14.180798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.652 [2024-11-19 20:11:14.180842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:40.652 [2024-11-19 20:11:14.180859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104704 / 261120 wr_cnt: 1 state: open 00:21:40.652 [2024-11-19 20:11:14.180870] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:40.652 [2024-11-19 20:11:14.180996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 
20:11:14.181073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:21:40.653 [2024-11-19 20:11:14.181280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:40.653 [2024-11-19 20:11:14.181692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:21:40.653 [2024-11-19 20:11:14.181701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a40704e4-a446-4db0-82b2-c45640f57cbf 00:21:40.653 [2024-11-19 20:11:14.181709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104704 00:21:40.653 [2024-11-19 20:11:14.181719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105664 00:21:40.653 [2024-11-19 20:11:14.181728] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104704 00:21:40.653 [2024-11-19 20:11:14.181737] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:21:40.653 [2024-11-19 20:11:14.181744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:40.654 [2024-11-19 20:11:14.181758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:40.654 [2024-11-19 20:11:14.181774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:40.654 [2024-11-19 20:11:14.181781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:40.654 [2024-11-19 20:11:14.181788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:40.654 [2024-11-19 20:11:14.181796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.654 [2024-11-19 20:11:14.181805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:40.654 [2024-11-19 20:11:14.181814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:21:40.654 [2024-11-19 20:11:14.181822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.195477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.654 [2024-11-19 20:11:14.195520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:40.654 [2024-11-19 20:11:14.195531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.634 ms 00:21:40.654 [2024-11-19 20:11:14.195547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.195952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.654 [2024-11-19 20:11:14.195971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:40.654 [2024-11-19 20:11:14.195981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:21:40.654 [2024-11-19 20:11:14.195989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.232617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.232667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.654 [2024-11-19 20:11:14.232684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.232693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.232766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.232776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.654 [2024-11-19 20:11:14.232785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.232794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.232880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
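Rollback 00:21:40.654 [2024-11-19 20:11:14.232892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.654 [2024-11-19 20:11:14.232902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.232914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654

The WAF figure in the stats dump above follows directly from the two counters printed beside it, a one-line Python check:

  print(105664 / 104704)   # total writes / user writes = 1.00917..., printed as "WAF: 1.0092"
  # The 960 extra blocks are non-user writes (presumably FTL metadata). Note that
  # "total valid LBAs" (104704) equals user writes: everything written once, still valid.
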
[2024-11-19 20:11:14.232932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.232942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.654 [2024-11-19 20:11:14.232951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.232959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.316554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.316618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.654 [2024-11-19 20:11:14.316638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.316647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.654 [2024-11-19 20:11:14.385283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.654 [2024-11-19 20:11:14.385374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.654 [2024-11-19 20:11:14.385468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.654 [2024-11-19 20:11:14.385594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:40.654 [2024-11-19 20:11:14.385655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19
20:11:14.385706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.654 [2024-11-19 20:11:14.385724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.654 [2024-11-19 20:11:14.385794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.654 [2024-11-19 20:11:14.385802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.654 [2024-11-19 20:11:14.385810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.654 [2024-11-19 20:11:14.385946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 696.813 ms, result 0 00:21:42.570 00:21:42.570 00:21:42.570 20:11:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:42.570 [2024-11-19 20:11:16.047005] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:21:42.570 [2024-11-19 20:11:16.047172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76716 ] 00:21:42.570 [2024-11-19 20:11:16.214494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.570 [2024-11-19 20:11:16.332803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.832 [2024-11-19 20:11:16.606657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:42.832 [2024-11-19 20:11:16.606742] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.095 [2024-11-19 20:11:16.767874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.767943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.095 [2024-11-19 20:11:16.767965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:43.095 [2024-11-19 20:11:16.767974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.768031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.768042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.095 [2024-11-19 20:11:16.768054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:43.095 [2024-11-19 20:11:16.768063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.768085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.095 [2024-11-19 20:11:16.769173] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.095 [2024-11-19 20:11:16.769261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 
20:11:16.769273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.095 [2024-11-19 20:11:16.769284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:21:43.095 [2024-11-19 20:11:16.769292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.771021] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.095 [2024-11-19 20:11:16.785334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.785383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.095 [2024-11-19 20:11:16.785398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.315 ms 00:21:43.095 [2024-11-19 20:11:16.785407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.785488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.785498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.095 [2024-11-19 20:11:16.785508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:43.095 [2024-11-19 20:11:16.785516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.793768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.793812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.095 [2024-11-19 20:11:16.793823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.173 ms 00:21:43.095 [2024-11-19 20:11:16.793832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.793919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.793928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.095 [2024-11-19 20:11:16.793938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:43.095 [2024-11-19 20:11:16.793947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.793993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.794004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.095 [2024-11-19 20:11:16.794014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:43.095 [2024-11-19 20:11:16.794023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.794047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.095 [2024-11-19 20:11:16.798048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.798100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.095 [2024-11-19 20:11:16.798112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.007 ms 00:21:43.095 [2024-11-19 20:11:16.798123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.095 [2024-11-19 20:11:16.798161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.095 [2024-11-19 20:11:16.798170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.096 [2024-11-19 
20:11:16.798179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:43.096 [2024-11-19 20:11:16.798188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.096 [2024-11-19 20:11:16.798254] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.096 [2024-11-19 20:11:16.798278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.096 [2024-11-19 20:11:16.798315] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.096 [2024-11-19 20:11:16.798333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.096 [2024-11-19 20:11:16.798440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.096 [2024-11-19 20:11:16.798452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.096 [2024-11-19 20:11:16.798463] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.096 [2024-11-19 20:11:16.798474] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798492] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:43.096 [2024-11-19 20:11:16.798500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.096 [2024-11-19 20:11:16.798515] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.096 [2024-11-19 20:11:16.798523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.096 [2024-11-19 20:11:16.798535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.096 [2024-11-19 20:11:16.798544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.096 [2024-11-19 20:11:16.798552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:43.096 [2024-11-19 20:11:16.798559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.096 [2024-11-19 20:11:16.798647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.096 [2024-11-19 20:11:16.798656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.096 [2024-11-19 20:11:16.798665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:43.096 [2024-11-19 20:11:16.798672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.096 [2024-11-19 20:11:16.798773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.096 [2024-11-19 20:11:16.798801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.096 [2024-11-19 20:11:16.798810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.096 [2024-11-19 20:11:16.798834] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.096 [2024-11-19 20:11:16.798856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.096 [2024-11-19 20:11:16.798870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.096 [2024-11-19 20:11:16.798877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:43.096 [2024-11-19 20:11:16.798884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.096 [2024-11-19 20:11:16.798890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.096 [2024-11-19 20:11:16.798898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:43.096 [2024-11-19 20:11:16.798911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.096 [2024-11-19 20:11:16.798927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.096 [2024-11-19 20:11:16.798948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.096 [2024-11-19 20:11:16.798969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.096 [2024-11-19 20:11:16.798982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.096 [2024-11-19 20:11:16.798989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:43.096 [2024-11-19 20:11:16.798996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.096 [2024-11-19 20:11:16.799002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.096 [2024-11-19 20:11:16.799009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:43.096 [2024-11-19 20:11:16.799016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.096 [2024-11-19 20:11:16.799022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.096 [2024-11-19 20:11:16.799029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:43.096 [2024-11-19 20:11:16.799036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.096 [2024-11-19 20:11:16.799043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.096 [2024-11-19 20:11:16.799049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:43.096 [2024-11-19 20:11:16.799056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.096 [2024-11-19 20:11:16.799064] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.096 [2024-11-19 20:11:16.799071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:43.096 [2024-11-19 20:11:16.799078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.799084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.096 [2024-11-19 20:11:16.799091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:43.096 [2024-11-19 20:11:16.799098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.799105] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.096 [2024-11-19 20:11:16.799112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.096 [2024-11-19 20:11:16.799123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.096 [2024-11-19 20:11:16.799131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.096 [2024-11-19 20:11:16.799139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.096 [2024-11-19 20:11:16.799147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.096 [2024-11-19 20:11:16.799154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.096 [2024-11-19 20:11:16.799162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.096 [2024-11-19 20:11:16.799169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.096 [2024-11-19 20:11:16.799176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.096 [2024-11-19 20:11:16.799184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.096 [2024-11-19 20:11:16.799194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:43.096 [2024-11-19 20:11:16.799212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:43.096 [2024-11-19 20:11:16.799240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:43.096 [2024-11-19 20:11:16.799248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:43.096 [2024-11-19 20:11:16.799256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:43.096 [2024-11-19 20:11:16.799264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:43.096 [2024-11-19 20:11:16.799272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:43.096 [2024-11-19 20:11:16.799280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:43.096 [2024-11-19 20:11:16.799287] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:43.096 [2024-11-19 20:11:16.799295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:43.096 [2024-11-19 20:11:16.799332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.096 [2024-11-19 20:11:16.799343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.096 [2024-11-19 20:11:16.799359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.096 [2024-11-19 20:11:16.799366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.097 [2024-11-19 20:11:16.799374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.097 [2024-11-19 20:11:16.799382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.799391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.097 [2024-11-19 20:11:16.799398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:21:43.097 [2024-11-19 20:11:16.799406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.831456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.831508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.097 [2024-11-19 20:11:16.831520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.006 ms 00:21:43.097 [2024-11-19 20:11:16.831528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.831633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.831641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.097 [2024-11-19 20:11:16.831650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:43.097 [2024-11-19 20:11:16.831659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.874300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.874360] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.097 [2024-11-19 20:11:16.874374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.580 ms 00:21:43.097 [2024-11-19 20:11:16.874383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.874433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.874443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.097 [2024-11-19 20:11:16.874453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.097 [2024-11-19 20:11:16.874465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.875088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.875132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.097 [2024-11-19 20:11:16.875144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:21:43.097 [2024-11-19 20:11:16.875153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.097 [2024-11-19 20:11:16.875336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.097 [2024-11-19 20:11:16.875349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.097 [2024-11-19 20:11:16.875358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:21:43.097 [2024-11-19 20:11:16.875373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.891136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.891185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.359 [2024-11-19 20:11:16.891200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.742 ms 00:21:43.359 [2024-11-19 20:11:16.891208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.905638] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:43.359 [2024-11-19 20:11:16.905704] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.359 [2024-11-19 20:11:16.905719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.905728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.359 [2024-11-19 20:11:16.905738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.383 ms 00:21:43.359 [2024-11-19 20:11:16.905746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.931497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.931561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.359 [2024-11-19 20:11:16.931573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.689 ms 00:21:43.359 [2024-11-19 20:11:16.931581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.944724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.944779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 
00:21:43.359 [2024-11-19 20:11:16.944791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.083 ms 00:21:43.359 [2024-11-19 20:11:16.944799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.957385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.957434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.359 [2024-11-19 20:11:16.957445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.539 ms 00:21:43.359 [2024-11-19 20:11:16.957452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:16.958120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:16.958152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.359 [2024-11-19 20:11:16.958163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:43.359 [2024-11-19 20:11:16.958175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.023080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.023146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.359 [2024-11-19 20:11:17.023170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.885 ms 00:21:43.359 [2024-11-19 20:11:17.023179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.034391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:43.359 [2024-11-19 20:11:17.037318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.037358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.359 [2024-11-19 20:11:17.037370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:21:43.359 [2024-11-19 20:11:17.037379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.037467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.037479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.359 [2024-11-19 20:11:17.037489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:43.359 [2024-11-19 20:11:17.037501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.039251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.039297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.359 [2024-11-19 20:11:17.039309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:21:43.359 [2024-11-19 20:11:17.039318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.039348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.039356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.359 [2024-11-19 20:11:17.039366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.359 [2024-11-19 20:11:17.039374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 
20:11:17.039417] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.359 [2024-11-19 20:11:17.039432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.039441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.359 [2024-11-19 20:11:17.039450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:43.359 [2024-11-19 20:11:17.039459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.065272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.065328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.359 [2024-11-19 20:11:17.065341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.794 ms 00:21:43.359 [2024-11-19 20:11:17.065356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.065445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.359 [2024-11-19 20:11:17.065455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:43.359 [2024-11-19 20:11:17.065465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:43.359 [2024-11-19 20:11:17.065474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.359 [2024-11-19 20:11:17.067064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.697 ms, result 0 00:21:44.748  [2024-11-19T20:11:19.483Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-19T20:11:20.427Z] Copying: 32/1024 [MB] (14 MBps) [2024-11-19T20:11:21.369Z] Copying: 45/1024 [MB] (13 MBps) [2024-11-19T20:11:22.312Z] Copying: 58/1024 [MB] (13 MBps) [2024-11-19T20:11:23.258Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-19T20:11:24.648Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-19T20:11:25.589Z] Copying: 90/1024 [MB] (10 MBps) [2024-11-19T20:11:26.535Z] Copying: 102/1024 [MB] (12 MBps) [2024-11-19T20:11:27.481Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-19T20:11:28.426Z] Copying: 129/1024 [MB] (14 MBps) [2024-11-19T20:11:29.372Z] Copying: 140/1024 [MB] (11 MBps) [2024-11-19T20:11:30.318Z] Copying: 152/1024 [MB] (12 MBps) [2024-11-19T20:11:31.263Z] Copying: 164/1024 [MB] (11 MBps) [2024-11-19T20:11:32.654Z] Copying: 175/1024 [MB] (11 MBps) [2024-11-19T20:11:33.600Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-19T20:11:34.546Z] Copying: 197/1024 [MB] (10 MBps) [2024-11-19T20:11:35.491Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-19T20:11:36.470Z] Copying: 224/1024 [MB] (15 MBps) [2024-11-19T20:11:37.412Z] Copying: 235/1024 [MB] (11 MBps) [2024-11-19T20:11:38.357Z] Copying: 253/1024 [MB] (18 MBps) [2024-11-19T20:11:39.302Z] Copying: 268/1024 [MB] (14 MBps) [2024-11-19T20:11:40.686Z] Copying: 284/1024 [MB] (15 MBps) [2024-11-19T20:11:41.627Z] Copying: 297/1024 [MB] (13 MBps) [2024-11-19T20:11:42.573Z] Copying: 319/1024 [MB] (22 MBps) [2024-11-19T20:11:43.519Z] Copying: 330/1024 [MB] (10 MBps) [2024-11-19T20:11:44.464Z] Copying: 340/1024 [MB] (10 MBps) [2024-11-19T20:11:45.407Z] Copying: 351/1024 [MB] (10 MBps) [2024-11-19T20:11:46.349Z] Copying: 361/1024 [MB] (10 MBps) [2024-11-19T20:11:47.291Z] Copying: 372/1024 [MB] (10 MBps) [2024-11-19T20:11:48.680Z] Copying: 395/1024 [MB] (22 MBps) [2024-11-19T20:11:49.624Z] Copying: 408/1024 [MB] (13 MBps) [2024-11-19T20:11:50.568Z] Copying: 418/1024 
[MB] (10 MBps) [2024-11-19T20:11:51.515Z] Copying: 429/1024 [MB] (10 MBps) [2024-11-19T20:11:52.461Z] Copying: 440/1024 [MB] (10 MBps) [2024-11-19T20:11:53.405Z] Copying: 458/1024 [MB] (17 MBps) [2024-11-19T20:11:54.348Z] Copying: 472/1024 [MB] (14 MBps) [2024-11-19T20:11:55.295Z] Copying: 485/1024 [MB] (12 MBps) [2024-11-19T20:11:56.685Z] Copying: 503/1024 [MB] (18 MBps) [2024-11-19T20:11:57.260Z] Copying: 520/1024 [MB] (17 MBps) [2024-11-19T20:11:58.645Z] Copying: 538/1024 [MB] (17 MBps) [2024-11-19T20:11:59.589Z] Copying: 558/1024 [MB] (19 MBps) [2024-11-19T20:12:00.534Z] Copying: 579/1024 [MB] (21 MBps) [2024-11-19T20:12:01.477Z] Copying: 597/1024 [MB] (17 MBps) [2024-11-19T20:12:02.419Z] Copying: 617/1024 [MB] (20 MBps) [2024-11-19T20:12:03.363Z] Copying: 639/1024 [MB] (21 MBps) [2024-11-19T20:12:04.308Z] Copying: 660/1024 [MB] (21 MBps) [2024-11-19T20:12:05.331Z] Copying: 682/1024 [MB] (21 MBps) [2024-11-19T20:12:06.276Z] Copying: 701/1024 [MB] (19 MBps) [2024-11-19T20:12:07.664Z] Copying: 725/1024 [MB] (23 MBps) [2024-11-19T20:12:08.610Z] Copying: 742/1024 [MB] (17 MBps) [2024-11-19T20:12:09.554Z] Copying: 766/1024 [MB] (24 MBps) [2024-11-19T20:12:10.498Z] Copying: 787/1024 [MB] (20 MBps) [2024-11-19T20:12:11.444Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-19T20:12:12.390Z] Copying: 815/1024 [MB] (15 MBps) [2024-11-19T20:12:13.336Z] Copying: 833/1024 [MB] (18 MBps) [2024-11-19T20:12:14.282Z] Copying: 851/1024 [MB] (18 MBps) [2024-11-19T20:12:15.669Z] Copying: 869/1024 [MB] (17 MBps) [2024-11-19T20:12:16.610Z] Copying: 890/1024 [MB] (21 MBps) [2024-11-19T20:12:17.554Z] Copying: 906/1024 [MB] (15 MBps) [2024-11-19T20:12:18.498Z] Copying: 924/1024 [MB] (18 MBps) [2024-11-19T20:12:19.443Z] Copying: 943/1024 [MB] (19 MBps) [2024-11-19T20:12:20.384Z] Copying: 964/1024 [MB] (20 MBps) [2024-11-19T20:12:21.329Z] Copying: 984/1024 [MB] (20 MBps) [2024-11-19T20:12:22.269Z] Copying: 1002/1024 [MB] (17 MBps) [2024-11-19T20:12:23.216Z] Copying: 1016/1024 [MB] (14 MBps) [2024-11-19T20:12:23.216Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 20:12:23.038675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.038760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:49.422 [2024-11-19 20:12:23.038778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.422 [2024-11-19 20:12:23.038789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.422 [2024-11-19 20:12:23.038832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:49.422 [2024-11-19 20:12:23.042287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.042336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:49.422 [2024-11-19 20:12:23.042349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 00:22:49.422 [2024-11-19 20:12:23.042359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.422 [2024-11-19 20:12:23.043517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.043543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:49.422 [2024-11-19 20:12:23.043555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:22:49.422 [2024-11-19 20:12:23.043564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:49.422 [2024-11-19 20:12:23.049959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.050004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:49.422 [2024-11-19 20:12:23.050017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:22:49.422 [2024-11-19 20:12:23.050028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.422 [2024-11-19 20:12:23.056398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.056430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:49.422 [2024-11-19 20:12:23.056447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.322 ms 00:22:49.422 [2024-11-19 20:12:23.056462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.422 [2024-11-19 20:12:23.083751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.083800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:49.422 [2024-11-19 20:12:23.083819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.215 ms 00:22:49.422 [2024-11-19 20:12:23.083830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.422 [2024-11-19 20:12:23.099552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.422 [2024-11-19 20:12:23.099600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:49.422 [2024-11-19 20:12:23.099618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.663 ms 00:22:49.422 [2024-11-19 20:12:23.099630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.684 [2024-11-19 20:12:23.474809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.684 [2024-11-19 20:12:23.474895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:49.684 [2024-11-19 20:12:23.474918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 375.113 ms 00:22:49.684 [2024-11-19 20:12:23.474932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.946 [2024-11-19 20:12:23.500722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.946 [2024-11-19 20:12:23.500779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:49.946 [2024-11-19 20:12:23.500799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.765 ms 00:22:49.946 [2024-11-19 20:12:23.500811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.946 [2024-11-19 20:12:23.526489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.946 [2024-11-19 20:12:23.526544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:49.946 [2024-11-19 20:12:23.526577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:22:49.946 [2024-11-19 20:12:23.526590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.946 [2024-11-19 20:12:23.551386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.946 [2024-11-19 20:12:23.551444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:49.946 [2024-11-19 20:12:23.551464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.678 ms 00:22:49.946 [2024-11-19 20:12:23.551476] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.946 [2024-11-19 20:12:23.575998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.946 [2024-11-19 20:12:23.576051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:49.946 [2024-11-19 20:12:23.576069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.408 ms 00:22:49.946 [2024-11-19 20:12:23.576081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.946 [2024-11-19 20:12:23.576136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:49.946 [2024-11-19 20:12:23.576158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:49.946 [2024-11-19 20:12:23.576174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576480] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:49.946 [2024-11-19 20:12:23.576791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 
20:12:23.576804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.576998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:22:49.947 [2024-11-19 20:12:23.577151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:49.947 [2024-11-19 20:12:23.577564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:49.947 [2024-11-19 20:12:23.577578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a40704e4-a446-4db0-82b2-c45640f57cbf 00:22:49.947 [2024-11-19 20:12:23.577591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:49.947 [2024-11-19 20:12:23.577604] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 27328 00:22:49.947 [2024-11-19 20:12:23.577617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26368 00:22:49.947 [2024-11-19 20:12:23.577631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0364 00:22:49.947 [2024-11-19 20:12:23.577650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:49.947 [2024-11-19 20:12:23.577671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:49.947 [2024-11-19 20:12:23.577685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:49.947 [2024-11-19 20:12:23.577705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:49.947 [2024-11-19 20:12:23.577716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:49.947 [2024-11-19 20:12:23.577730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.947 [2024-11-19 20:12:23.577743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:49.947 [2024-11-19 20:12:23.577758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:22:49.947 [2024-11-19 20:12:23.577771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.591402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.947 [2024-11-19 20:12:23.591454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:49.947 [2024-11-19 20:12:23.591472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.598 ms 00:22:49.947 [2024-11-19 20:12:23.591493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.591976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.947 [2024-11-19 20:12:23.592021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:49.947 [2024-11-19 20:12:23.592037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:22:49.947 [2024-11-19 20:12:23.592051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.628373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.947 [2024-11-19 20:12:23.628436] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.947 [2024-11-19 20:12:23.628460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.947 [2024-11-19 20:12:23.628475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.628566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.947 [2024-11-19 20:12:23.628581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.947 [2024-11-19 20:12:23.628597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.947 [2024-11-19 20:12:23.628613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.628701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.947 [2024-11-19 20:12:23.628734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.947 [2024-11-19 20:12:23.628749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.947 [2024-11-19 20:12:23.628769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.628794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.947 [2024-11-19 20:12:23.628814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.947 [2024-11-19 20:12:23.628829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.947 [2024-11-19 20:12:23.628844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.947 [2024-11-19 20:12:23.713569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.947 [2024-11-19 20:12:23.713633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.947 [2024-11-19 20:12:23.713659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.948 [2024-11-19 20:12:23.713672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.782805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.782870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.209 [2024-11-19 20:12:23.782888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.782901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.783043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.209 [2024-11-19 20:12:23.783058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.783171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.209 [2024-11-19 20:12:23.783186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:50.209 [2024-11-19 20:12:23.783393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.209 [2024-11-19 20:12:23.783410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.783495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:50.209 [2024-11-19 20:12:23.783507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.783592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.209 [2024-11-19 20:12:23.783602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.209 [2024-11-19 20:12:23.783701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.209 [2024-11-19 20:12:23.783722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.209 [2024-11-19 20:12:23.783736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.209 [2024-11-19 20:12:23.783899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 745.186 ms, result 0 00:22:50.781 00:22:50.781 00:22:50.781 20:12:24 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:53.332 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74524 00:22:53.332 20:12:26 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74524 ']' 00:22:53.332 20:12:26 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74524 00:22:53.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74524) - No such process 00:22:53.332 Process with pid 74524 is not found 00:22:53.332 20:12:26 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74524 is not found' 00:22:53.332 Remove shared memory files 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:53.332 
20:12:26 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:53.332 20:12:26 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:53.332 ************************************ 00:22:53.332 END TEST ftl_restore 00:22:53.332 ************************************ 00:22:53.332 00:22:53.332 real 4m39.718s 00:22:53.332 user 4m26.527s 00:22:53.332 sys 0m12.941s 00:22:53.332 20:12:26 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.332 20:12:26 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:53.332 20:12:26 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:53.332 20:12:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:53.332 20:12:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.332 20:12:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:53.332 ************************************ 00:22:53.332 START TEST ftl_dirty_shutdown 00:22:53.332 ************************************ 00:22:53.332 20:12:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:53.332 * Looking for test storage... 00:22:53.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:53.332 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.332 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.332 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.594 --rc genhtml_branch_coverage=1 00:22:53.594 --rc genhtml_function_coverage=1 00:22:53.594 --rc genhtml_legend=1 00:22:53.594 --rc geninfo_all_blocks=1 00:22:53.594 --rc geninfo_unexecuted_blocks=1 00:22:53.594 00:22:53.594 ' 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.594 --rc genhtml_branch_coverage=1 00:22:53.594 --rc genhtml_function_coverage=1 00:22:53.594 --rc genhtml_legend=1 00:22:53.594 --rc geninfo_all_blocks=1 00:22:53.594 --rc geninfo_unexecuted_blocks=1 00:22:53.594 00:22:53.594 ' 00:22:53.594 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.594 --rc genhtml_branch_coverage=1 00:22:53.594 --rc genhtml_function_coverage=1 00:22:53.594 --rc genhtml_legend=1 00:22:53.594 --rc geninfo_all_blocks=1 00:22:53.594 --rc geninfo_unexecuted_blocks=1 00:22:53.594 00:22:53.595 ' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.595 --rc genhtml_branch_coverage=1 00:22:53.595 --rc genhtml_function_coverage=1 00:22:53.595 --rc genhtml_legend=1 00:22:53.595 --rc geninfo_all_blocks=1 00:22:53.595 --rc geninfo_unexecuted_blocks=1 00:22:53.595 00:22:53.595 ' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:53.595 20:12:27 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77517 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77517 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77517 ']' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.595 20:12:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.595 [2024-11-19 20:12:27.262798] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
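(For orientation: the xtrace records above and below amount to starting an SPDK target and then building the FTL bdev stack over RPC. A minimal shell sketch of those steps, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk shown in the log; the harness's waitforlisten helper is approximated here by polling rpc_get_methods, and <lvs-uuid>/<lvol-uuid> stand in for the UUIDs the records print:)

    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py"
    "$spdk/build/bin/spdk_tgt" -m 0x1 &                 # target pinned to core 0
    svcpid=$!
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do    # stand-in for waitforlisten
        sleep 0.5                                       # wait for /var/tmp/spdk.sock
    done
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
    "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on it
    "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # 103424 MiB thin lvol
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache split
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0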
00:22:53.595 [2024-11-19 20:12:27.263316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77517 ] 00:22:53.857 [2024-11-19 20:12:27.425808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.857 [2024-11-19 20:12:27.542628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:54.801 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:54.802 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:55.064 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:55.064 { 00:22:55.064 "name": "nvme0n1", 00:22:55.064 "aliases": [ 00:22:55.064 "c5f1e2eb-7366-40e6-b32b-6878a52750e6" 00:22:55.064 ], 00:22:55.064 "product_name": "NVMe disk", 00:22:55.064 "block_size": 4096, 00:22:55.064 "num_blocks": 1310720, 00:22:55.064 "uuid": "c5f1e2eb-7366-40e6-b32b-6878a52750e6", 00:22:55.064 "numa_id": -1, 00:22:55.064 "assigned_rate_limits": { 00:22:55.064 "rw_ios_per_sec": 0, 00:22:55.064 "rw_mbytes_per_sec": 0, 00:22:55.064 "r_mbytes_per_sec": 0, 00:22:55.064 "w_mbytes_per_sec": 0 00:22:55.064 }, 00:22:55.064 "claimed": true, 00:22:55.064 "claim_type": "read_many_write_one", 00:22:55.064 "zoned": false, 00:22:55.064 "supported_io_types": { 00:22:55.064 "read": true, 00:22:55.064 "write": true, 00:22:55.064 "unmap": true, 00:22:55.064 "flush": true, 00:22:55.064 "reset": true, 00:22:55.064 "nvme_admin": true, 00:22:55.064 "nvme_io": true, 00:22:55.064 "nvme_io_md": false, 00:22:55.064 "write_zeroes": true, 00:22:55.064 "zcopy": false, 00:22:55.064 "get_zone_info": false, 00:22:55.064 "zone_management": false, 00:22:55.064 "zone_append": false, 00:22:55.064 "compare": true, 00:22:55.064 "compare_and_write": false, 00:22:55.064 "abort": true, 00:22:55.064 "seek_hole": false, 00:22:55.064 "seek_data": false, 00:22:55.064 
"copy": true, 00:22:55.064 "nvme_iov_md": false 00:22:55.064 }, 00:22:55.064 "driver_specific": { 00:22:55.064 "nvme": [ 00:22:55.064 { 00:22:55.064 "pci_address": "0000:00:11.0", 00:22:55.064 "trid": { 00:22:55.064 "trtype": "PCIe", 00:22:55.064 "traddr": "0000:00:11.0" 00:22:55.064 }, 00:22:55.064 "ctrlr_data": { 00:22:55.064 "cntlid": 0, 00:22:55.064 "vendor_id": "0x1b36", 00:22:55.064 "model_number": "QEMU NVMe Ctrl", 00:22:55.064 "serial_number": "12341", 00:22:55.064 "firmware_revision": "8.0.0", 00:22:55.064 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:55.064 "oacs": { 00:22:55.064 "security": 0, 00:22:55.064 "format": 1, 00:22:55.064 "firmware": 0, 00:22:55.064 "ns_manage": 1 00:22:55.064 }, 00:22:55.064 "multi_ctrlr": false, 00:22:55.064 "ana_reporting": false 00:22:55.064 }, 00:22:55.064 "vs": { 00:22:55.065 "nvme_version": "1.4" 00:22:55.065 }, 00:22:55.065 "ns_data": { 00:22:55.065 "id": 1, 00:22:55.065 "can_share": false 00:22:55.065 } 00:22:55.065 } 00:22:55.065 ], 00:22:55.065 "mp_policy": "active_passive" 00:22:55.065 } 00:22:55.065 } 00:22:55.065 ]' 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:55.065 20:12:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:55.327 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8fb33f84-d81b-41bd-8ca2-39e322b7681d 00:22:55.327 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:55.327 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fb33f84-d81b-41bd-8ca2-39e322b7681d 00:22:55.590 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:55.851 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9458583c-d6aa-40e2-b1b5-2fadeed8823b 00:22:55.852 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9458583c-d6aa-40e2-b1b5-2fadeed8823b 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:56.114 20:12:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:56.376 { 00:22:56.376 "name": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:56.376 "aliases": [ 00:22:56.376 "lvs/nvme0n1p0" 00:22:56.376 ], 00:22:56.376 "product_name": "Logical Volume", 00:22:56.376 "block_size": 4096, 00:22:56.376 "num_blocks": 26476544, 00:22:56.376 "uuid": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:56.376 "assigned_rate_limits": { 00:22:56.376 "rw_ios_per_sec": 0, 00:22:56.376 "rw_mbytes_per_sec": 0, 00:22:56.376 "r_mbytes_per_sec": 0, 00:22:56.376 "w_mbytes_per_sec": 0 00:22:56.376 }, 00:22:56.376 "claimed": false, 00:22:56.376 "zoned": false, 00:22:56.376 "supported_io_types": { 00:22:56.376 "read": true, 00:22:56.376 "write": true, 00:22:56.376 "unmap": true, 00:22:56.376 "flush": false, 00:22:56.376 "reset": true, 00:22:56.376 "nvme_admin": false, 00:22:56.376 "nvme_io": false, 00:22:56.376 "nvme_io_md": false, 00:22:56.376 "write_zeroes": true, 00:22:56.376 "zcopy": false, 00:22:56.376 "get_zone_info": false, 00:22:56.376 "zone_management": false, 00:22:56.376 "zone_append": false, 00:22:56.376 "compare": false, 00:22:56.376 "compare_and_write": false, 00:22:56.376 "abort": false, 00:22:56.376 "seek_hole": true, 00:22:56.376 "seek_data": true, 00:22:56.376 "copy": false, 00:22:56.376 "nvme_iov_md": false 00:22:56.376 }, 00:22:56.376 "driver_specific": { 00:22:56.376 "lvol": { 00:22:56.376 "lvol_store_uuid": "9458583c-d6aa-40e2-b1b5-2fadeed8823b", 00:22:56.376 "base_bdev": "nvme0n1", 00:22:56.376 "thin_provision": true, 00:22:56.376 "num_allocated_clusters": 0, 00:22:56.376 "snapshot": false, 00:22:56.376 "clone": false, 00:22:56.376 "esnap_clone": false 00:22:56.376 } 00:22:56.376 } 00:22:56.376 } 00:22:56.376 ]' 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:56.376 20:12:30 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:56.636 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:56.894 { 00:22:56.894 "name": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:56.894 "aliases": [ 00:22:56.894 "lvs/nvme0n1p0" 00:22:56.894 ], 00:22:56.894 "product_name": "Logical Volume", 00:22:56.894 "block_size": 4096, 00:22:56.894 "num_blocks": 26476544, 00:22:56.894 "uuid": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:56.894 "assigned_rate_limits": { 00:22:56.894 "rw_ios_per_sec": 0, 00:22:56.894 "rw_mbytes_per_sec": 0, 00:22:56.894 "r_mbytes_per_sec": 0, 00:22:56.894 "w_mbytes_per_sec": 0 00:22:56.894 }, 00:22:56.894 "claimed": false, 00:22:56.894 "zoned": false, 00:22:56.894 "supported_io_types": { 00:22:56.894 "read": true, 00:22:56.894 "write": true, 00:22:56.894 "unmap": true, 00:22:56.894 "flush": false, 00:22:56.894 "reset": true, 00:22:56.894 "nvme_admin": false, 00:22:56.894 "nvme_io": false, 00:22:56.894 "nvme_io_md": false, 00:22:56.894 "write_zeroes": true, 00:22:56.894 "zcopy": false, 00:22:56.894 "get_zone_info": false, 00:22:56.894 "zone_management": false, 00:22:56.894 "zone_append": false, 00:22:56.894 "compare": false, 00:22:56.894 "compare_and_write": false, 00:22:56.894 "abort": false, 00:22:56.894 "seek_hole": true, 00:22:56.894 "seek_data": true, 00:22:56.894 "copy": false, 00:22:56.894 "nvme_iov_md": false 00:22:56.894 }, 00:22:56.894 "driver_specific": { 00:22:56.894 "lvol": { 00:22:56.894 "lvol_store_uuid": "9458583c-d6aa-40e2-b1b5-2fadeed8823b", 00:22:56.894 "base_bdev": "nvme0n1", 00:22:56.894 "thin_provision": true, 00:22:56.894 "num_allocated_clusters": 0, 00:22:56.894 "snapshot": false, 00:22:56.894 "clone": false, 00:22:56.894 "esnap_clone": false 00:22:56.894 } 00:22:56.894 } 00:22:56.894 } 00:22:56.894 ]' 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:56.894 20:12:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:57.153 20:12:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:57.412 { 00:22:57.412 "name": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:57.412 "aliases": [ 00:22:57.412 "lvs/nvme0n1p0" 00:22:57.412 ], 00:22:57.412 "product_name": "Logical Volume", 00:22:57.412 "block_size": 4096, 00:22:57.412 "num_blocks": 26476544, 00:22:57.412 "uuid": "f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4", 00:22:57.412 "assigned_rate_limits": { 00:22:57.412 "rw_ios_per_sec": 0, 00:22:57.412 "rw_mbytes_per_sec": 0, 00:22:57.412 "r_mbytes_per_sec": 0, 00:22:57.412 "w_mbytes_per_sec": 0 00:22:57.412 }, 00:22:57.412 "claimed": false, 00:22:57.412 "zoned": false, 00:22:57.412 "supported_io_types": { 00:22:57.412 "read": true, 00:22:57.412 "write": true, 00:22:57.412 "unmap": true, 00:22:57.412 "flush": false, 00:22:57.412 "reset": true, 00:22:57.412 "nvme_admin": false, 00:22:57.412 "nvme_io": false, 00:22:57.412 "nvme_io_md": false, 00:22:57.412 "write_zeroes": true, 00:22:57.412 "zcopy": false, 00:22:57.412 "get_zone_info": false, 00:22:57.412 "zone_management": false, 00:22:57.412 "zone_append": false, 00:22:57.412 "compare": false, 00:22:57.412 "compare_and_write": false, 00:22:57.412 "abort": false, 00:22:57.412 "seek_hole": true, 00:22:57.412 "seek_data": true, 00:22:57.412 "copy": false, 00:22:57.412 "nvme_iov_md": false 00:22:57.412 }, 00:22:57.412 "driver_specific": { 00:22:57.412 "lvol": { 00:22:57.412 "lvol_store_uuid": "9458583c-d6aa-40e2-b1b5-2fadeed8823b", 00:22:57.412 "base_bdev": "nvme0n1", 00:22:57.412 "thin_provision": true, 00:22:57.412 "num_allocated_clusters": 0, 00:22:57.412 "snapshot": false, 00:22:57.412 "clone": false, 00:22:57.412 "esnap_clone": false 00:22:57.412 } 00:22:57.412 } 00:22:57.412 } 00:22:57.412 ]' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 
--l2p_dram_limit 10' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:57.412 20:12:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f0f8a9ff-dba3-45a9-97e5-ec6d498d00c4 --l2p_dram_limit 10 -c nvc0n1p0 00:22:57.672 [2024-11-19 20:12:31.285563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.285599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:57.672 [2024-11-19 20:12:31.285612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:57.672 [2024-11-19 20:12:31.285618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.285658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.285666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.672 [2024-11-19 20:12:31.285674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:57.672 [2024-11-19 20:12:31.285679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.285699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:57.672 [2024-11-19 20:12:31.286240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:57.672 [2024-11-19 20:12:31.286257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.286263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.672 [2024-11-19 20:12:31.286272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:22:57.672 [2024-11-19 20:12:31.286278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.286303] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 29e552cc-be1f-4a80-a058-31f7183132d9 00:22:57.672 [2024-11-19 20:12:31.287306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.287328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:57.672 [2024-11-19 20:12:31.287336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:57.672 [2024-11-19 20:12:31.287343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.291967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.291996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.672 [2024-11-19 20:12:31.292006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.553 ms 00:22:57.672 [2024-11-19 20:12:31.292014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.292080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.292090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.672 [2024-11-19 20:12:31.292096] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:57.672 [2024-11-19 20:12:31.292106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.292134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.292142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:57.672 [2024-11-19 20:12:31.292149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:57.672 [2024-11-19 20:12:31.292158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.292174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:57.672 [2024-11-19 20:12:31.295004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.295117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.672 [2024-11-19 20:12:31.295135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:22:57.672 [2024-11-19 20:12:31.295141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.295169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.672 [2024-11-19 20:12:31.295175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:57.672 [2024-11-19 20:12:31.295183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:57.672 [2024-11-19 20:12:31.295189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.672 [2024-11-19 20:12:31.295235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:57.672 [2024-11-19 20:12:31.295341] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:57.672 [2024-11-19 20:12:31.295352] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:57.672 [2024-11-19 20:12:31.295360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:57.673 [2024-11-19 20:12:31.295370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295376] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295383] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:57.673 [2024-11-19 20:12:31.295390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:57.673 [2024-11-19 20:12:31.295398] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:57.673 [2024-11-19 20:12:31.295403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:57.673 [2024-11-19 20:12:31.295410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.673 [2024-11-19 20:12:31.295416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:57.673 [2024-11-19 20:12:31.295423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:22:57.673 [2024-11-19 20:12:31.295432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.673 [2024-11-19 20:12:31.295498] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.673 [2024-11-19 20:12:31.295504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:57.673 [2024-11-19 20:12:31.295511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:57.673 [2024-11-19 20:12:31.295516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.673 [2024-11-19 20:12:31.295595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:57.673 [2024-11-19 20:12:31.295603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:57.673 [2024-11-19 20:12:31.295610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:57.673 [2024-11-19 20:12:31.295628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:57.673 [2024-11-19 20:12:31.295650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.673 [2024-11-19 20:12:31.295661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:57.673 [2024-11-19 20:12:31.295666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:57.673 [2024-11-19 20:12:31.295673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.673 [2024-11-19 20:12:31.295678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:57.673 [2024-11-19 20:12:31.295684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:57.673 [2024-11-19 20:12:31.295689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:57.673 [2024-11-19 20:12:31.295702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:57.673 [2024-11-19 20:12:31.295722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:57.673 [2024-11-19 20:12:31.295738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:57.673 [2024-11-19 20:12:31.295755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295765] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:57.673 [2024-11-19 20:12:31.295770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:57.673 [2024-11-19 20:12:31.295788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.673 [2024-11-19 20:12:31.295800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:57.673 [2024-11-19 20:12:31.295806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:57.673 [2024-11-19 20:12:31.295812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.673 [2024-11-19 20:12:31.295817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:57.673 [2024-11-19 20:12:31.295823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:57.673 [2024-11-19 20:12:31.295827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:57.673 [2024-11-19 20:12:31.295840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:57.673 [2024-11-19 20:12:31.295846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:57.673 [2024-11-19 20:12:31.295858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:57.673 [2024-11-19 20:12:31.295863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.673 [2024-11-19 20:12:31.295877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:57.673 [2024-11-19 20:12:31.295885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:57.673 [2024-11-19 20:12:31.295891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:57.673 [2024-11-19 20:12:31.295897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:57.673 [2024-11-19 20:12:31.295903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:57.673 [2024-11-19 20:12:31.295910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:57.673 [2024-11-19 20:12:31.295917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:57.673 [2024-11-19 20:12:31.295926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.295933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:57.673 [2024-11-19 20:12:31.295940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:57.673 [2024-11-19 20:12:31.295946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:57.673 [2024-11-19 20:12:31.295953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:57.673 [2024-11-19 20:12:31.295958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:57.673 [2024-11-19 20:12:31.295965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:57.673 [2024-11-19 20:12:31.295970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:57.673 [2024-11-19 20:12:31.295976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:57.673 [2024-11-19 20:12:31.295981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:57.673 [2024-11-19 20:12:31.295989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.295994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.296000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.296006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.296014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:57.673 [2024-11-19 20:12:31.296020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:57.673 [2024-11-19 20:12:31.296027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.296033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:57.673 [2024-11-19 20:12:31.296041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:57.673 [2024-11-19 20:12:31.296046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:57.673 [2024-11-19 20:12:31.296053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:57.673 [2024-11-19 20:12:31.296058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.673 [2024-11-19 20:12:31.296065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:57.673 [2024-11-19 20:12:31.296071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:22:57.673 [2024-11-19 20:12:31.296078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.673 [2024-11-19 20:12:31.296117] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:57.673 [2024-11-19 20:12:31.296128] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:01.951 [2024-11-19 20:12:34.916686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.917011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:01.951 [2024-11-19 20:12:34.917040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3620.552 ms 00:23:01.951 [2024-11-19 20:12:34.917053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.948143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.948418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:01.951 [2024-11-19 20:12:34.948442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.823 ms 00:23:01.951 [2024-11-19 20:12:34.948455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.948599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.948613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:01.951 [2024-11-19 20:12:34.948623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:01.951 [2024-11-19 20:12:34.948637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.983610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.983818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:01.951 [2024-11-19 20:12:34.983837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.934 ms 00:23:01.951 [2024-11-19 20:12:34.983849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.983887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.983905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:01.951 [2024-11-19 20:12:34.983915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:01.951 [2024-11-19 20:12:34.983926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.984513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.984539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:01.951 [2024-11-19 20:12:34.984550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:23:01.951 [2024-11-19 20:12:34.984563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:34.984679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:34.984692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:01.951 [2024-11-19 20:12:34.984704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:01.951 [2024-11-19 20:12:34.984718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.001930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.001979] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:01.951 [2024-11-19 20:12:35.001990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.192 ms 00:23:01.951 [2024-11-19 20:12:35.002002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.015183] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:01.951 [2024-11-19 20:12:35.018902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.018946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:01.951 [2024-11-19 20:12:35.018959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:23:01.951 [2024-11-19 20:12:35.018968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.112776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.112939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:01.951 [2024-11-19 20:12:35.112963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.776 ms 00:23:01.951 [2024-11-19 20:12:35.112971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.113131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.113143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:01.951 [2024-11-19 20:12:35.113155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:01.951 [2024-11-19 20:12:35.113162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.132676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.132711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:01.951 [2024-11-19 20:12:35.132723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.473 ms 00:23:01.951 [2024-11-19 20:12:35.132730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.150927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.150955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:01.951 [2024-11-19 20:12:35.150965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:23:01.951 [2024-11-19 20:12:35.150971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.151462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.151472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:01.951 [2024-11-19 20:12:35.151481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:23:01.951 [2024-11-19 20:12:35.151488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.211043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.211146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:01.951 [2024-11-19 20:12:35.211164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.514 ms 00:23:01.951 [2024-11-19 20:12:35.211174] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.230463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.230490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:01.951 [2024-11-19 20:12:35.230499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.225 ms 00:23:01.951 [2024-11-19 20:12:35.230506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.248657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.248683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:01.951 [2024-11-19 20:12:35.248692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.122 ms 00:23:01.951 [2024-11-19 20:12:35.248698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.267530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.267555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:01.951 [2024-11-19 20:12:35.267564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:23:01.951 [2024-11-19 20:12:35.267570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.267602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.267609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:01.951 [2024-11-19 20:12:35.267619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:01.951 [2024-11-19 20:12:35.267625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.267683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.951 [2024-11-19 20:12:35.267690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:01.951 [2024-11-19 20:12:35.267699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:01.951 [2024-11-19 20:12:35.267705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.951 [2024-11-19 20:12:35.268368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3982.455 ms, result 0 00:23:01.951 { 00:23:01.951 "name": "ftl0", 00:23:01.951 "uuid": "29e552cc-be1f-4a80-a058-31f7183132d9" 00:23:01.951 } 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:01.951 /dev/nbd0 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:01.951 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:01.952 1+0 records in 00:23:01.952 1+0 records out 00:23:01.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237289 s, 17.3 MB/s 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:01.952 20:12:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:02.210 [2024-11-19 20:12:35.781393] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:02.210 [2024-11-19 20:12:35.781506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77661 ] 00:23:02.210 [2024-11-19 20:12:35.939602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.468 [2024-11-19 20:12:36.045462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.843  [2024-11-19T20:12:38.598Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-19T20:12:39.532Z] Copying: 419/1024 [MB] (225 MBps) [2024-11-19T20:12:40.466Z] Copying: 679/1024 [MB] (259 MBps) [2024-11-19T20:12:40.725Z] Copying: 932/1024 [MB] (252 MBps) [2024-11-19T20:12:41.290Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:23:07.496 00:23:07.496 20:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:10.026 20:12:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:10.026 [2024-11-19 20:12:43.489442] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
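(The records around this point exercise the dirty-shutdown data path: ftl0 is exposed through NBD and 1 GiB of random data is pushed at it with direct I/O. Condensed into a shell sketch from the commands the xtrace shows; $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl, and the md5sum output is presumably captured to testfile.md5 for the post-restart comparison, matching the md5sum -c check the earlier ftl_restore run performed:)

    spdk=/home/vagrant/spdk_repo/spdk
    testdir="$spdk/test/ftl"
    modprobe nbd
    "$spdk/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0
    # 262144 x 4096 B = 1 GiB of random payload, checksummed before writing.
    "$spdk/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of="$testdir/testfile" --bs=4096 --count=262144
    md5sum "$testdir/testfile"
    # Direct-I/O write to the FTL device; the Copying: progress below is this run.
    "$spdk/build/bin/spdk_dd" -m 0x2 --if="$testdir/testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct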
00:23:10.026 [2024-11-19 20:12:43.489767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77741 ] 00:23:10.026 [2024-11-19 20:12:43.660336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.027 [2024-11-19 20:12:43.770607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.412  [2024-11-19T20:12:46.150Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-19T20:12:47.087Z] Copying: 21416/1048576 [kB] (7992 kBps) [2024-11-19T20:12:48.029Z] Copying: 43/1024 [MB] (22 MBps) [2024-11-19T20:12:49.415Z] Copying: 58/1024 [MB] (15 MBps) [2024-11-19T20:12:50.358Z] Copying: 76/1024 [MB] (17 MBps) [2024-11-19T20:12:51.298Z] Copying: 91/1024 [MB] (14 MBps) [2024-11-19T20:12:52.391Z] Copying: 111/1024 [MB] (20 MBps) [2024-11-19T20:12:53.336Z] Copying: 125/1024 [MB] (13 MBps) [2024-11-19T20:12:54.277Z] Copying: 141/1024 [MB] (16 MBps) [2024-11-19T20:12:55.221Z] Copying: 156/1024 [MB] (14 MBps) [2024-11-19T20:12:56.157Z] Copying: 174/1024 [MB] (18 MBps) [2024-11-19T20:12:57.101Z] Copying: 198/1024 [MB] (24 MBps) [2024-11-19T20:12:58.047Z] Copying: 215/1024 [MB] (16 MBps) [2024-11-19T20:12:59.431Z] Copying: 228/1024 [MB] (13 MBps) [2024-11-19T20:13:00.375Z] Copying: 245/1024 [MB] (17 MBps) [2024-11-19T20:13:01.310Z] Copying: 263/1024 [MB] (17 MBps) [2024-11-19T20:13:02.252Z] Copying: 286/1024 [MB] (23 MBps) [2024-11-19T20:13:03.191Z] Copying: 310/1024 [MB] (23 MBps) [2024-11-19T20:13:04.133Z] Copying: 335/1024 [MB] (25 MBps) [2024-11-19T20:13:05.074Z] Copying: 359/1024 [MB] (23 MBps) [2024-11-19T20:13:06.014Z] Copying: 378/1024 [MB] (19 MBps) [2024-11-19T20:13:07.389Z] Copying: 396/1024 [MB] (18 MBps) [2024-11-19T20:13:08.328Z] Copying: 432/1024 [MB] (35 MBps) [2024-11-19T20:13:09.271Z] Copying: 451/1024 [MB] (18 MBps) [2024-11-19T20:13:10.210Z] Copying: 469/1024 [MB] (17 MBps) [2024-11-19T20:13:11.155Z] Copying: 487/1024 [MB] (18 MBps) [2024-11-19T20:13:12.099Z] Copying: 503/1024 [MB] (16 MBps) [2024-11-19T20:13:13.042Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-19T20:13:14.430Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-19T20:13:15.371Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-19T20:13:16.317Z] Copying: 557/1024 [MB] (17 MBps) [2024-11-19T20:13:17.261Z] Copying: 573/1024 [MB] (16 MBps) [2024-11-19T20:13:18.205Z] Copying: 586/1024 [MB] (12 MBps) [2024-11-19T20:13:19.148Z] Copying: 600/1024 [MB] (14 MBps) [2024-11-19T20:13:20.090Z] Copying: 614/1024 [MB] (14 MBps) [2024-11-19T20:13:21.035Z] Copying: 628/1024 [MB] (13 MBps) [2024-11-19T20:13:22.413Z] Copying: 641/1024 [MB] (12 MBps) [2024-11-19T20:13:23.353Z] Copying: 669/1024 [MB] (28 MBps) [2024-11-19T20:13:24.286Z] Copying: 691/1024 [MB] (21 MBps) [2024-11-19T20:13:25.224Z] Copying: 720/1024 [MB] (28 MBps) [2024-11-19T20:13:26.166Z] Copying: 747/1024 [MB] (26 MBps) [2024-11-19T20:13:27.107Z] Copying: 763/1024 [MB] (15 MBps) [2024-11-19T20:13:28.043Z] Copying: 778/1024 [MB] (15 MBps) [2024-11-19T20:13:29.426Z] Copying: 795/1024 [MB] (17 MBps) [2024-11-19T20:13:30.359Z] Copying: 810/1024 [MB] (15 MBps) [2024-11-19T20:13:31.294Z] Copying: 841/1024 [MB] (30 MBps) [2024-11-19T20:13:32.229Z] Copying: 875/1024 [MB] (34 MBps) [2024-11-19T20:13:33.163Z] Copying: 911/1024 [MB] (35 MBps) [2024-11-19T20:13:34.151Z] Copying: 947/1024 [MB] (35 MBps) [2024-11-19T20:13:35.125Z] Copying: 982/1024 [MB] (35 MBps) 
[2024-11-19T20:13:35.383Z] Copying: 1018/1024 [MB] (35 MBps) [2024-11-19T20:13:35.951Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:24:02.157 00:24:02.157 20:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:02.157 20:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:02.417 20:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:02.417 [2024-11-19 20:13:36.099150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.417 [2024-11-19 20:13:36.099194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.418 [2024-11-19 20:13:36.099206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.418 [2024-11-19 20:13:36.099215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.099246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.418 [2024-11-19 20:13:36.101458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.101607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.418 [2024-11-19 20:13:36.101625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.193 ms 00:24:02.418 [2024-11-19 20:13:36.101632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.103515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.103543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.418 [2024-11-19 20:13:36.103552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:24:02.418 [2024-11-19 20:13:36.103559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.119450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.119481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.418 [2024-11-19 20:13:36.119491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.873 ms 00:24:02.418 [2024-11-19 20:13:36.119498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.124168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.124190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.418 [2024-11-19 20:13:36.124201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:24:02.418 [2024-11-19 20:13:36.124208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.143722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.143830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.418 [2024-11-19 20:13:36.143846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.460 ms 00:24:02.418 [2024-11-19 20:13:36.143852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.156235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.156267] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.418 [2024-11-19 20:13:36.156279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.352 ms 00:24:02.418 [2024-11-19 20:13:36.156295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.156402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.156411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.418 [2024-11-19 20:13:36.156420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:02.418 [2024-11-19 20:13:36.156427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.175367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.175393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.418 [2024-11-19 20:13:36.175402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.925 ms 00:24:02.418 [2024-11-19 20:13:36.175408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.418 [2024-11-19 20:13:36.193492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.418 [2024-11-19 20:13:36.193517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.418 [2024-11-19 20:13:36.193526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.053 ms 00:24:02.418 [2024-11-19 20:13:36.193532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.679 [2024-11-19 20:13:36.211404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.680 [2024-11-19 20:13:36.211428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.680 [2024-11-19 20:13:36.211437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.840 ms 00:24:02.680 [2024-11-19 20:13:36.211443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.680 [2024-11-19 20:13:36.228817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.680 [2024-11-19 20:13:36.228842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.680 [2024-11-19 20:13:36.228851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:24:02.680 [2024-11-19 20:13:36.228857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.680 [2024-11-19 20:13:36.228886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.680 [2024-11-19 20:13:36.228898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228940] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.228995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 
20:13:36.229114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:24:02.680 [2024-11-19 20:13:36.229309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.680 [2024-11-19 20:13:36.229471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.681 [2024-11-19 20:13:36.229621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.681 [2024-11-19 20:13:36.229629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29e552cc-be1f-4a80-a058-31f7183132d9 00:24:02.681 [2024-11-19 20:13:36.229635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:02.681 [2024-11-19 20:13:36.229644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:02.681 [2024-11-19 20:13:36.229650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.681 [2024-11-19 20:13:36.229660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.681 [2024-11-19 20:13:36.229667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:24:02.681 [2024-11-19 20:13:36.229674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.681 [2024-11-19 20:13:36.229680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.681 [2024-11-19 20:13:36.229686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.681 [2024-11-19 20:13:36.229691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:02.681 [2024-11-19 20:13:36.229698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.681 [2024-11-19 20:13:36.229704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.681 [2024-11-19 20:13:36.229712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:24:02.681 [2024-11-19 20:13:36.229717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.240004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.681 [2024-11-19 20:13:36.240028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.681 [2024-11-19 20:13:36.240040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.261 ms 00:24:02.681 [2024-11-19 20:13:36.240046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.240363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.681 [2024-11-19 20:13:36.240376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.681 [2024-11-19 20:13:36.240384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:02.681 [2024-11-19 20:13:36.240390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.275246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.275274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.681 [2024-11-19 20:13:36.275284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.275291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.275339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.275346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.681 [2024-11-19 20:13:36.275353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.275360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.275418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.275427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.681 [2024-11-19 20:13:36.275437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.275442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.275459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.275465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.681 [2024-11-19 20:13:36.275472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 
20:13:36.275478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.338551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.338684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.681 [2024-11-19 20:13:36.338701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.338708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.681 [2024-11-19 20:13:36.390372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.681 [2024-11-19 20:13:36.390530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.681 [2024-11-19 20:13:36.390600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.681 [2024-11-19 20:13:36.390703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.681 [2024-11-19 20:13:36.390753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.681 [2024-11-19 20:13:36.390809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.390859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.681 [2024-11-19 20:13:36.390867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.681 [2024-11-19 20:13:36.390875] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.681 [2024-11-19 20:13:36.390882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.681 [2024-11-19 20:13:36.391006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 291.817 ms, result 0 00:24:02.681 true 00:24:02.681 20:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77517 00:24:02.681 20:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77517 00:24:02.681 20:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:02.943 [2024-11-19 20:13:36.484832] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:02.943 [2024-11-19 20:13:36.485133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78294 ] 00:24:02.943 [2024-11-19 20:13:36.644143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.203 [2024-11-19 20:13:36.738638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.147  [2024-11-19T20:13:39.331Z] Copying: 253/1024 [MB] (253 MBps) [2024-11-19T20:13:40.273Z] Copying: 511/1024 [MB] (258 MBps) [2024-11-19T20:13:41.218Z] Copying: 770/1024 [MB] (258 MBps) [2024-11-19T20:13:41.218Z] Copying: 1023/1024 [MB] (253 MBps) [2024-11-19T20:13:41.790Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:24:07.996 00:24:07.996 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77517 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:07.996 20:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:07.996 [2024-11-19 20:13:41.610311] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
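
Pieced together from the xtrace, this is the shape of the dirty-shutdown exercise: the live bdev subsystem (including ftl0) was serialized to a JSON config back at script steps 64-66, the spdk_tgt process (pid 77517) is killed outright with SIGKILL (the "dirty shutdown" the test is named for), and a standalone spdk_dd then re-attaches ftl0 through the saved JSON, producing the recovery-heavy startup traced below. A sketch of that sequence; $spdk_tgt_pid and the short file names are placeholders:

    # Dirty-shutdown pattern, reconstructed from the dirty_shutdown.sh
    # steps traced above.
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev   # snapshot the live bdev/FTL config
        echo ']}'
    } > ftl.json

    kill -9 "$spdk_tgt_pid"   # no orderly target teardown; state is left on media

    # Stage fresh data, then write it through ftl0 via the saved config; FTL
    # must replay its on-media NV cache and metadata instead of a clean load.
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json
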
00:24:07.996 [2024-11-19 20:13:41.611043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78350 ] 00:24:07.996 [2024-11-19 20:13:41.765348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.257 [2024-11-19 20:13:41.861373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.518 [2024-11-19 20:13:42.092330] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.518 [2024-11-19 20:13:42.092503] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.519 [2024-11-19 20:13:42.155835] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:08.519 [2024-11-19 20:13:42.156284] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:08.519 [2024-11-19 20:13:42.156852] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:08.781 [2024-11-19 20:13:42.509085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.781 [2024-11-19 20:13:42.509142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.781 [2024-11-19 20:13:42.509159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:08.781 [2024-11-19 20:13:42.509168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.781 [2024-11-19 20:13:42.509254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.781 [2024-11-19 20:13:42.509266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.781 [2024-11-19 20:13:42.509277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:08.781 [2024-11-19 20:13:42.509285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.781 [2024-11-19 20:13:42.509308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.781 [2024-11-19 20:13:42.510075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.781 [2024-11-19 20:13:42.510094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.781 [2024-11-19 20:13:42.510104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.782 [2024-11-19 20:13:42.510113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:24:08.782 [2024-11-19 20:13:42.510121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.512412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.782 [2024-11-19 20:13:42.527522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.527756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.782 [2024-11-19 20:13:42.527780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.112 ms 00:24:08.782 [2024-11-19 20:13:42.527791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.527964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.527994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:08.782 [2024-11-19 20:13:42.528005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:08.782 [2024-11-19 20:13:42.528013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.539768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.539817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.782 [2024-11-19 20:13:42.539828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.666 ms 00:24:08.782 [2024-11-19 20:13:42.539838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.539927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.539937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.782 [2024-11-19 20:13:42.539947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:08.782 [2024-11-19 20:13:42.539955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.540019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.540035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.782 [2024-11-19 20:13:42.540044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:08.782 [2024-11-19 20:13:42.540052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.540080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.782 [2024-11-19 20:13:42.544723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.544765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.782 [2024-11-19 20:13:42.544777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:24:08.782 [2024-11-19 20:13:42.544785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.544824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.544832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.782 [2024-11-19 20:13:42.544842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:08.782 [2024-11-19 20:13:42.544850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.544891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.782 [2024-11-19 20:13:42.544923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.782 [2024-11-19 20:13:42.544967] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.782 [2024-11-19 20:13:42.544985] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:08.782 [2024-11-19 20:13:42.545099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.782 [2024-11-19 20:13:42.545113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.782 
[2024-11-19 20:13:42.545126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:08.782 [2024-11-19 20:13:42.545138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545153] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545162] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.782 [2024-11-19 20:13:42.545171] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.782 [2024-11-19 20:13:42.545180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.782 [2024-11-19 20:13:42.545187] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.782 [2024-11-19 20:13:42.545197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.545206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.782 [2024-11-19 20:13:42.545215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:24:08.782 [2024-11-19 20:13:42.545256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.545343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.782 [2024-11-19 20:13:42.545356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.782 [2024-11-19 20:13:42.545365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:08.782 [2024-11-19 20:13:42.545372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.782 [2024-11-19 20:13:42.545481] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.782 [2024-11-19 20:13:42.545494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.782 [2024-11-19 20:13:42.545504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.782 [2024-11-19 20:13:42.545532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.782 [2024-11-19 20:13:42.545555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.782 [2024-11-19 20:13:42.545569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.782 [2024-11-19 20:13:42.545585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.782 [2024-11-19 20:13:42.545592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.782 [2024-11-19 20:13:42.545599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.782 [2024-11-19 20:13:42.545611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.782 [2024-11-19 20:13:42.545618] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.782 [2024-11-19 20:13:42.545633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.782 [2024-11-19 20:13:42.545654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.782 [2024-11-19 20:13:42.545674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.782 [2024-11-19 20:13:42.545695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.782 [2024-11-19 20:13:42.545716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.782 [2024-11-19 20:13:42.545739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.782 [2024-11-19 20:13:42.545754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.782 [2024-11-19 20:13:42.545761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.782 [2024-11-19 20:13:42.545768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.782 [2024-11-19 20:13:42.545774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.782 [2024-11-19 20:13:42.545780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.782 [2024-11-19 20:13:42.545786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.782 [2024-11-19 20:13:42.545798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.782 [2024-11-19 20:13:42.545805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 20:13:42.545812] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.782 [2024-11-19 20:13:42.545821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.782 [2024-11-19 20:13:42.545828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.782 [2024-11-19 20:13:42.545841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.782 [2024-11-19 
20:13:42.545850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.783 [2024-11-19 20:13:42.545858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.783 [2024-11-19 20:13:42.545865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.783 [2024-11-19 20:13:42.545872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.783 [2024-11-19 20:13:42.545879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.783 [2024-11-19 20:13:42.545886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.783 [2024-11-19 20:13:42.545895] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.783 [2024-11-19 20:13:42.545905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.545914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.783 [2024-11-19 20:13:42.545921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.783 [2024-11-19 20:13:42.545935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.783 [2024-11-19 20:13:42.545943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.783 [2024-11-19 20:13:42.545950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.783 [2024-11-19 20:13:42.545957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.783 [2024-11-19 20:13:42.545964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.783 [2024-11-19 20:13:42.545972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.783 [2024-11-19 20:13:42.545978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.783 [2024-11-19 20:13:42.545985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.545992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.545999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.546006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.546013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.783 [2024-11-19 20:13:42.546020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:08.783 [2024-11-19 20:13:42.546028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.546036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.783 [2024-11-19 20:13:42.546043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.783 [2024-11-19 20:13:42.546050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.783 [2024-11-19 20:13:42.546059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.783 [2024-11-19 20:13:42.546066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.783 [2024-11-19 20:13:42.546073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.783 [2024-11-19 20:13:42.546081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:08.783 [2024-11-19 20:13:42.546090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.045 [2024-11-19 20:13:42.584757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.045 [2024-11-19 20:13:42.584814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.045 [2024-11-19 20:13:42.584828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.620 ms 00:24:09.045 [2024-11-19 20:13:42.584837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.045 [2024-11-19 20:13:42.584933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.045 [2024-11-19 20:13:42.584948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:09.045 [2024-11-19 20:13:42.584959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:09.045 [2024-11-19 20:13:42.584968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.045 [2024-11-19 20:13:42.633528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.633825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.046 [2024-11-19 20:13:42.633850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.489 ms 00:24:09.046 [2024-11-19 20:13:42.633866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.633927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.633940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.046 [2024-11-19 20:13:42.633951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:09.046 [2024-11-19 20:13:42.633959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.634810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.634838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.046 [2024-11-19 20:13:42.634851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:24:09.046 [2024-11-19 20:13:42.634861] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.635069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.635091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.046 [2024-11-19 20:13:42.635102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:24:09.046 [2024-11-19 20:13:42.635112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.653552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.653596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.046 [2024-11-19 20:13:42.653609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:24:09.046 [2024-11-19 20:13:42.653619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.668958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:09.046 [2024-11-19 20:13:42.669170] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:09.046 [2024-11-19 20:13:42.669191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.669202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:09.046 [2024-11-19 20:13:42.669213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.440 ms 00:24:09.046 [2024-11-19 20:13:42.669240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.703624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.703680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:09.046 [2024-11-19 20:13:42.703707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.077 ms 00:24:09.046 [2024-11-19 20:13:42.703717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.716752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.716798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:09.046 [2024-11-19 20:13:42.716811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.972 ms 00:24:09.046 [2024-11-19 20:13:42.716819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.729649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.729689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:09.046 [2024-11-19 20:13:42.729704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.777 ms 00:24:09.046 [2024-11-19 20:13:42.729713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.730429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.730464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:09.046 [2024-11-19 20:13:42.730477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:24:09.046 [2024-11-19 20:13:42.730487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
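
Unlike the first, clean startup, this one is dominated by restore steps: Initialize NV cache takes 48.489 ms, Initialize metadata 38.620 ms, Restore valid map metadata 34.077 ms, and so on. Each step is bracketed by a 428:trace_step record carrying its name and a 430:trace_step record carrying its duration, so per-step cost can be tallied straight from the console output. A hypothetical helper, assuming the raw log with one record per line ("console.log" is a placeholder file name):

    # Hypothetical: pair each trace_step name with the duration record that
    # follows it, then rank the startup steps by cost.
    awk '/428:trace_step/ { sub(/.*name: /, ""); step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, step }' console.log | sort -rn
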
00:24:09.046 [2024-11-19 20:13:42.804511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.804565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:09.046 [2024-11-19 20:13:42.804580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.004 ms 00:24:09.046 [2024-11-19 20:13:42.804590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.816152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:09.046 [2024-11-19 20:13:42.819526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.819568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:09.046 [2024-11-19 20:13:42.819582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.882 ms 00:24:09.046 [2024-11-19 20:13:42.819592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.819682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.819694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:09.046 [2024-11-19 20:13:42.819705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:09.046 [2024-11-19 20:13:42.819715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.819800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.819814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:09.046 [2024-11-19 20:13:42.819824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:09.046 [2024-11-19 20:13:42.819833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.819856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.819870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:09.046 [2024-11-19 20:13:42.819881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:09.046 [2024-11-19 20:13:42.819889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.046 [2024-11-19 20:13:42.819931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:09.046 [2024-11-19 20:13:42.819944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.046 [2024-11-19 20:13:42.819954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:09.046 [2024-11-19 20:13:42.819964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:09.046 [2024-11-19 20:13:42.819973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-11-19 20:13:42.846795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-11-19 20:13:42.846847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:09.308 [2024-11-19 20:13:42.846861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.793 ms 00:24:09.308 [2024-11-19 20:13:42.846871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-11-19 20:13:42.846974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-11-19 
20:13:42.846986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:09.308 [2024-11-19 20:13:42.846996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:09.308 [2024-11-19 20:13:42.847006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-11-19 20:13:42.848617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.946 ms, result 0 00:24:10.253  [2024-11-19T20:13:44.994Z] Copying: 18/1024 [MB] (18 MBps) [... ~90 intermediate spdk_dd progress updates omitted ...] [2024-11-19T20:15:13.779Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-19 20:15:13.562782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.563116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:39.985 [2024-11-19 20:15:13.563150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:39.985 [2024-11-19 20:15:13.563160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.567259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:39.985 [2024-11-19 20:15:13.572697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.572760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:39.985 [2024-11-19 20:15:13.572775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:25:39.985 [2024-11-19 20:15:13.572784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.584508] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.584570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:39.985 [2024-11-19 20:15:13.584585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.658 ms 00:25:39.985 [2024-11-19 20:15:13.584594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.609266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.609321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:39.985 [2024-11-19 20:15:13.609335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.653 ms 00:25:39.985 [2024-11-19 20:15:13.609343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.615555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.615611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:39.985 [2024-11-19 20:15:13.615623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:25:39.985 [2024-11-19 20:15:13.615631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.643665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.643718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:39.985 [2024-11-19 20:15:13.643733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.986 ms 00:25:39.985 [2024-11-19 20:15:13.643741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-19 20:15:13.659777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-19 20:15:13.659825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:39.985 [2024-11-19 20:15:13.659840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:25:39.985 [2024-11-19 20:15:13.659848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.245 [2024-11-19 20:15:13.953723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.245 [2024-11-19 20:15:13.953797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:40.245 [2024-11-19 20:15:13.953812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 293.818 ms 00:25:40.245 [2024-11-19 20:15:13.953829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.245 [2024-11-19 20:15:13.980696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.245 [2024-11-19 20:15:13.980747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:40.245 [2024-11-19 20:15:13.980761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.850 ms 00:25:40.245 [2024-11-19 20:15:13.980768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.245 [2024-11-19 20:15:14.006504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.245 [2024-11-19 20:15:14.006555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:40.245 [2024-11-19 20:15:14.006568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.686 ms 00:25:40.245 [2024-11-19 20:15:14.006577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:40.245 [2024-11-19 20:15:14.031837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.245 [2024-11-19 20:15:14.031885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:40.245 [2024-11-19 20:15:14.031898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.211 ms 00:25:40.245 [2024-11-19 20:15:14.031908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.507 [2024-11-19 20:15:14.057816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.507 [2024-11-19 20:15:14.057864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:40.507 [2024-11-19 20:15:14.057877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.828 ms 00:25:40.507 [2024-11-19 20:15:14.057885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.507 [2024-11-19 20:15:14.057932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:40.507 [2024-11-19 20:15:14.057949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107008 / 261120 wr_cnt: 1 state: open 00:25:40.507 [2024-11-19 20:15:14.057960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.057969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.057977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.057986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.057994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:40.507 [2024-11-19 20:15:14.058130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058315] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 
20:15:14.058550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:25:40.508 [2024-11-19 20:15:14.058747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:40.508 [2024-11-19 20:15:14.058821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:40.508 [2024-11-19 20:15:14.058830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29e552cc-be1f-4a80-a058-31f7183132d9 00:25:40.508 [2024-11-19 20:15:14.058839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107008 00:25:40.508 [2024-11-19 20:15:14.058855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107968 00:25:40.508 [2024-11-19 20:15:14.058870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107008 00:25:40.508 [2024-11-19 20:15:14.058879] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:25:40.508 [2024-11-19 20:15:14.058886] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:40.508 [2024-11-19 20:15:14.058895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:40.508 [2024-11-19 20:15:14.058904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:40.508 [2024-11-19 20:15:14.058914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:40.508 [2024-11-19 20:15:14.058923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:40.508 [2024-11-19 20:15:14.058931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.508 [2024-11-19 20:15:14.058940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:40.508 [2024-11-19 20:15:14.058950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:25:40.509 [2024-11-19 20:15:14.058958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.072751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.509 [2024-11-19 20:15:14.072796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.509 [2024-11-19 20:15:14.072808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.753 ms 00:25:40.509 [2024-11-19 20:15:14.072816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.073203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.509 [2024-11-19 20:15:14.073242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:25:40.509 [2024-11-19 20:15:14.073254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:40.509 [2024-11-19 20:15:14.073263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.110275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.110329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.509 [2024-11-19 20:15:14.110342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.110352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.110426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.110437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.509 [2024-11-19 20:15:14.110447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.110470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.110555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.110568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.509 [2024-11-19 20:15:14.110577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.110585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.110602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.110610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.509 [2024-11-19 20:15:14.110618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.110626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.197279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.197339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.509 [2024-11-19 20:15:14.197353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.197362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.268536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.268595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.509 [2024-11-19 20:15:14.268607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.268616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.268705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.268715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.509 [2024-11-19 20:15:14.268725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.268733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.268772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 
20:15:14.268781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.509 [2024-11-19 20:15:14.268791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.268799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.268896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.268913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.509 [2024-11-19 20:15:14.268922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.268930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.268962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.268971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.509 [2024-11-19 20:15:14.268980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.268988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.269029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.269044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.509 [2024-11-19 20:15:14.269054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.269062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.269110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.509 [2024-11-19 20:15:14.269121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.509 [2024-11-19 20:15:14.269129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.509 [2024-11-19 20:15:14.269138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.509 [2024-11-19 20:15:14.269313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 708.691 ms, result 0 00:25:41.894 00:25:41.894 00:25:41.894 20:15:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:44.432 20:15:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.432 [2024-11-19 20:15:17.845633] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:25:44.432 [2024-11-19 20:15:17.845766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79333 ] 00:25:44.432 [2024-11-19 20:15:18.008826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.432 [2024-11-19 20:15:18.115516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.694 [2024-11-19 20:15:18.410849] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.694 [2024-11-19 20:15:18.410928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.957 [2024-11-19 20:15:18.574123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.574189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:44.957 [2024-11-19 20:15:18.574210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:44.957 [2024-11-19 20:15:18.574234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.574291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.574302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.957 [2024-11-19 20:15:18.574314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:44.957 [2024-11-19 20:15:18.574322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.574344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:44.957 [2024-11-19 20:15:18.575044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:44.957 [2024-11-19 20:15:18.575075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.575084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.957 [2024-11-19 20:15:18.575094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:25:44.957 [2024-11-19 20:15:18.575102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.576991] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:44.957 [2024-11-19 20:15:18.591401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.591619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:44.957 [2024-11-19 20:15:18.591644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.412 ms 00:25:44.957 [2024-11-19 20:15:18.591654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.591776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.591790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:44.957 [2024-11-19 20:15:18.591800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:44.957 [2024-11-19 20:15:18.591809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.600313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:44.957 [2024-11-19 20:15:18.600356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.957 [2024-11-19 20:15:18.600368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.420 ms 00:25:44.957 [2024-11-19 20:15:18.600378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.600463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.600473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.957 [2024-11-19 20:15:18.600482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:44.957 [2024-11-19 20:15:18.600491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.600537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.600549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:44.957 [2024-11-19 20:15:18.600559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:44.957 [2024-11-19 20:15:18.600568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.600592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.957 [2024-11-19 20:15:18.604691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.604732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.957 [2024-11-19 20:15:18.604744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 00:25:44.957 [2024-11-19 20:15:18.604755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.604792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.604801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:44.957 [2024-11-19 20:15:18.604810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:44.957 [2024-11-19 20:15:18.604818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.604872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:44.957 [2024-11-19 20:15:18.604897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:44.957 [2024-11-19 20:15:18.604935] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:44.957 [2024-11-19 20:15:18.604955] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:44.957 [2024-11-19 20:15:18.605069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:44.957 [2024-11-19 20:15:18.605082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:44.957 [2024-11-19 20:15:18.605093] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:44.957 [2024-11-19 20:15:18.605106] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:44.957 [2024-11-19 20:15:18.605116] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:44.957 [2024-11-19 20:15:18.605125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:44.957 [2024-11-19 20:15:18.605133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:44.957 [2024-11-19 20:15:18.605143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:44.957 [2024-11-19 20:15:18.605153] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:44.957 [2024-11-19 20:15:18.605165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.605172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:44.957 [2024-11-19 20:15:18.605180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:25:44.957 [2024-11-19 20:15:18.605188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.605303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 20:15:18.605314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:44.957 [2024-11-19 20:15:18.605323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:44.957 [2024-11-19 20:15:18.605330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 20:15:18.605439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:44.957 [2024-11-19 20:15:18.605454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:44.957 [2024-11-19 20:15:18.605463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.957 [2024-11-19 20:15:18.605472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.957 [2024-11-19 20:15:18.605482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:44.957 [2024-11-19 20:15:18.605489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:44.957 [2024-11-19 20:15:18.605497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:44.957 [2024-11-19 20:15:18.605506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:44.958 [2024-11-19 20:15:18.605514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.958 [2024-11-19 20:15:18.605528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:44.958 [2024-11-19 20:15:18.605535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:44.958 [2024-11-19 20:15:18.605542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.958 [2024-11-19 20:15:18.605551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:44.958 [2024-11-19 20:15:18.605558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:44.958 [2024-11-19 20:15:18.605572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:44.958 [2024-11-19 20:15:18.605587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605595] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:44.958 [2024-11-19 20:15:18.605609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:44.958 [2024-11-19 20:15:18.605634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:44.958 [2024-11-19 20:15:18.605654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:44.958 [2024-11-19 20:15:18.605674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:44.958 [2024-11-19 20:15:18.605695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.958 [2024-11-19 20:15:18.605708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:44.958 [2024-11-19 20:15:18.605714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:44.958 [2024-11-19 20:15:18.605720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.958 [2024-11-19 20:15:18.605728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:44.958 [2024-11-19 20:15:18.605736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:44.958 [2024-11-19 20:15:18.605742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:44.958 [2024-11-19 20:15:18.605755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:44.958 [2024-11-19 20:15:18.605761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:44.958 [2024-11-19 20:15:18.605775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:44.958 [2024-11-19 20:15:18.605785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.958 [2024-11-19 20:15:18.605801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:44.958 [2024-11-19 20:15:18.605808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:44.958 [2024-11-19 20:15:18.605814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:44.958 
[2024-11-19 20:15:18.605821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:44.958 [2024-11-19 20:15:18.605830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:44.958 [2024-11-19 20:15:18.605838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:44.958 [2024-11-19 20:15:18.605847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:44.958 [2024-11-19 20:15:18.605856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:44.958 [2024-11-19 20:15:18.605872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:44.958 [2024-11-19 20:15:18.605879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:44.958 [2024-11-19 20:15:18.605886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:44.958 [2024-11-19 20:15:18.605893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:44.958 [2024-11-19 20:15:18.605899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:44.958 [2024-11-19 20:15:18.605906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:44.958 [2024-11-19 20:15:18.605914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:44.958 [2024-11-19 20:15:18.605921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:44.958 [2024-11-19 20:15:18.605927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:44.958 [2024-11-19 20:15:18.605965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:44.958 [2024-11-19 20:15:18.605976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:44.958 [2024-11-19 20:15:18.605992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:44.958 [2024-11-19 20:15:18.605999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:44.958 [2024-11-19 20:15:18.606006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:44.958 [2024-11-19 20:15:18.606014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.958 [2024-11-19 20:15:18.606023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:44.958 [2024-11-19 20:15:18.606033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:25:44.958 [2024-11-19 20:15:18.606041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.958 [2024-11-19 20:15:18.638453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.958 [2024-11-19 20:15:18.638677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.958 [2024-11-19 20:15:18.638697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.366 ms 00:25:44.958 [2024-11-19 20:15:18.638706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.958 [2024-11-19 20:15:18.638803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.958 [2024-11-19 20:15:18.638812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:44.958 [2024-11-19 20:15:18.638821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:44.958 [2024-11-19 20:15:18.638829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.958 [2024-11-19 20:15:18.681710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.958 [2024-11-19 20:15:18.681768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.958 [2024-11-19 20:15:18.681783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.821 ms 00:25:44.958 [2024-11-19 20:15:18.681792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.958 [2024-11-19 20:15:18.681845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.958 [2024-11-19 20:15:18.681855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.959 [2024-11-19 20:15:18.681865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.959 [2024-11-19 20:15:18.681878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.959 [2024-11-19 20:15:18.682553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.959 [2024-11-19 20:15:18.682580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.959 [2024-11-19 20:15:18.682592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:25:44.959 [2024-11-19 20:15:18.682600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.959 [2024-11-19 20:15:18.682768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.959 [2024-11-19 20:15:18.682779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.959 [2024-11-19 20:15:18.682789] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:44.959 [2024-11-19 20:15:18.682803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.959 [2024-11-19 20:15:18.698817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.959 [2024-11-19 20:15:18.698863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.959 [2024-11-19 20:15:18.698879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.993 ms 00:25:44.959 [2024-11-19 20:15:18.698889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.959 [2024-11-19 20:15:18.713747] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:44.959 [2024-11-19 20:15:18.713799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:44.959 [2024-11-19 20:15:18.713814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.959 [2024-11-19 20:15:18.713823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:44.959 [2024-11-19 20:15:18.713834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.808 ms 00:25:44.959 [2024-11-19 20:15:18.713842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.959 [2024-11-19 20:15:18.739857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.959 [2024-11-19 20:15:18.739914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:44.959 [2024-11-19 20:15:18.739926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.958 ms 00:25:44.959 [2024-11-19 20:15:18.739934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.753257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.753470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:45.221 [2024-11-19 20:15:18.753492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.261 ms 00:25:45.221 [2024-11-19 20:15:18.753500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.766578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.766632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:45.221 [2024-11-19 20:15:18.766646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.898 ms 00:25:45.221 [2024-11-19 20:15:18.766655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.767337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.767369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.221 [2024-11-19 20:15:18.767380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:25:45.221 [2024-11-19 20:15:18.767392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.833322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.833386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:45.221 [2024-11-19 20:15:18.833410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.908 ms 00:25:45.221 [2024-11-19 20:15:18.833419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.844788] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.221 [2024-11-19 20:15:18.848091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.848143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.221 [2024-11-19 20:15:18.848157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.609 ms 00:25:45.221 [2024-11-19 20:15:18.848166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.848284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.848297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:45.221 [2024-11-19 20:15:18.848308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:45.221 [2024-11-19 20:15:18.848320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.850127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.850183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.221 [2024-11-19 20:15:18.850195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:25:45.221 [2024-11-19 20:15:18.850204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.850249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.850259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.221 [2024-11-19 20:15:18.850268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:45.221 [2024-11-19 20:15:18.850276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.221 [2024-11-19 20:15:18.850321] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:45.221 [2024-11-19 20:15:18.850337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.221 [2024-11-19 20:15:18.850347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:45.222 [2024-11-19 20:15:18.850356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:45.222 [2024-11-19 20:15:18.850365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.222 [2024-11-19 20:15:18.876709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.222 [2024-11-19 20:15:18.876761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.222 [2024-11-19 20:15:18.876776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.326 ms 00:25:45.222 [2024-11-19 20:15:18.876792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.222 [2024-11-19 20:15:18.876887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.222 [2024-11-19 20:15:18.876898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.222 [2024-11-19 20:15:18.876908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:45.222 [2024-11-19 20:15:18.876916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:45.222 [2024-11-19 20:15:18.878212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.592 ms, result 0 00:25:46.615  [2024-11-19T20:15:21.353Z] Copying: 1052/1048576 [kB] (1052 kBps) [2024-11-19T20:15:22.295Z] Copying: 4516/1048576 [kB] (3464 kBps) [2024-11-19T20:15:23.235Z] Copying: 20/1024 [MB] (16 MBps) [2024-11-19T20:15:24.173Z] Copying: 45/1024 [MB] (24 MBps) [2024-11-19T20:15:25.115Z] Copying: 71/1024 [MB] (26 MBps) [2024-11-19T20:15:26.501Z] Copying: 108/1024 [MB] (36 MBps) [2024-11-19T20:15:27.074Z] Copying: 136/1024 [MB] (28 MBps) [2024-11-19T20:15:28.453Z] Copying: 160/1024 [MB] (24 MBps) [2024-11-19T20:15:29.395Z] Copying: 190/1024 [MB] (30 MBps) [2024-11-19T20:15:30.335Z] Copying: 221/1024 [MB] (30 MBps) [2024-11-19T20:15:31.277Z] Copying: 247/1024 [MB] (26 MBps) [2024-11-19T20:15:32.219Z] Copying: 268/1024 [MB] (21 MBps) [2024-11-19T20:15:33.162Z] Copying: 295/1024 [MB] (27 MBps) [2024-11-19T20:15:34.102Z] Copying: 311/1024 [MB] (15 MBps) [2024-11-19T20:15:35.487Z] Copying: 343/1024 [MB] (32 MBps) [2024-11-19T20:15:36.428Z] Copying: 371/1024 [MB] (27 MBps) [2024-11-19T20:15:37.368Z] Copying: 393/1024 [MB] (22 MBps) [2024-11-19T20:15:38.363Z] Copying: 419/1024 [MB] (26 MBps) [2024-11-19T20:15:39.304Z] Copying: 446/1024 [MB] (26 MBps) [2024-11-19T20:15:40.244Z] Copying: 467/1024 [MB] (20 MBps) [2024-11-19T20:15:41.188Z] Copying: 498/1024 [MB] (30 MBps) [2024-11-19T20:15:42.130Z] Copying: 529/1024 [MB] (30 MBps) [2024-11-19T20:15:43.076Z] Copying: 559/1024 [MB] (30 MBps) [2024-11-19T20:15:44.460Z] Copying: 589/1024 [MB] (29 MBps) [2024-11-19T20:15:45.403Z] Copying: 617/1024 [MB] (27 MBps) [2024-11-19T20:15:46.346Z] Copying: 647/1024 [MB] (30 MBps) [2024-11-19T20:15:47.291Z] Copying: 672/1024 [MB] (25 MBps) [2024-11-19T20:15:48.233Z] Copying: 702/1024 [MB] (29 MBps) [2024-11-19T20:15:49.177Z] Copying: 732/1024 [MB] (29 MBps) [2024-11-19T20:15:50.122Z] Copying: 757/1024 [MB] (25 MBps) [2024-11-19T20:15:51.503Z] Copying: 782/1024 [MB] (24 MBps) [2024-11-19T20:15:52.072Z] Copying: 810/1024 [MB] (28 MBps) [2024-11-19T20:15:53.460Z] Copying: 850/1024 [MB] (39 MBps) [2024-11-19T20:15:54.402Z] Copying: 869/1024 [MB] (19 MBps) [2024-11-19T20:15:55.344Z] Copying: 891/1024 [MB] (22 MBps) [2024-11-19T20:15:56.285Z] Copying: 915/1024 [MB] (23 MBps) [2024-11-19T20:15:57.231Z] Copying: 931/1024 [MB] (16 MBps) [2024-11-19T20:15:58.178Z] Copying: 955/1024 [MB] (23 MBps) [2024-11-19T20:15:59.123Z] Copying: 982/1024 [MB] (26 MBps) [2024-11-19T20:15:59.708Z] Copying: 1007/1024 [MB] (25 MBps) [2024-11-19T20:15:59.970Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 20:15:59.783388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.176 [2024-11-19 20:15:59.783560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:26.176 [2024-11-19 20:15:59.783593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:26.176 [2024-11-19 20:15:59.783604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.176 [2024-11-19 20:15:59.783633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:26.176 [2024-11-19 20:15:59.787895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.176 [2024-11-19 20:15:59.787945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:26.176 [2024-11-19 20:15:59.787961] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.241 ms 00:26:26.176 [2024-11-19 20:15:59.787971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.176 [2024-11-19 20:15:59.788256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.176 [2024-11-19 20:15:59.788272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:26.176 [2024-11-19 20:15:59.788288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:26:26.177 [2024-11-19 20:15:59.788298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.803118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.803202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:26.177 [2024-11-19 20:15:59.803238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.793 ms 00:26:26.177 [2024-11-19 20:15:59.803250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.809532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.809585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:26.177 [2024-11-19 20:15:59.809599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:26:26.177 [2024-11-19 20:15:59.809620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.837165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.837415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:26.177 [2024-11-19 20:15:59.837442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.477 ms 00:26:26.177 [2024-11-19 20:15:59.837451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.853823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.853875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:26.177 [2024-11-19 20:15:59.853889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.327 ms 00:26:26.177 [2024-11-19 20:15:59.853898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.858320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.858371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:26.177 [2024-11-19 20:15:59.858383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.365 ms 00:26:26.177 [2024-11-19 20:15:59.858393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.884155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.884202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:26.177 [2024-11-19 20:15:59.884214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.738 ms 00:26:26.177 [2024-11-19 20:15:59.884243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.910294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.910346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 
00:26:26.177 [2024-11-19 20:15:59.910373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.003 ms 00:26:26.177 [2024-11-19 20:15:59.910381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.935886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.936094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:26.177 [2024-11-19 20:15:59.936117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.455 ms 00:26:26.177 [2024-11-19 20:15:59.936126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.961331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.177 [2024-11-19 20:15:59.961381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:26.177 [2024-11-19 20:15:59.961393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.994 ms 00:26:26.177 [2024-11-19 20:15:59.961401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.177 [2024-11-19 20:15:59.961447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:26.177 [2024-11-19 20:15:59.961464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:26.177 [2024-11-19 20:15:59.961476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:26.177 [2024-11-19 20:15:59.961485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 
[2024-11-19 20:15:59.961597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 
state: free 00:26:26.177 [2024-11-19 20:15:59.961790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:26.177 [2024-11-19 20:15:59.961861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 
0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.961994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:26.178 [2024-11-19 20:15:59.962285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:26.178 [2024-11-19 20:15:59.962294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29e552cc-be1f-4a80-a058-31f7183132d9 00:26:26.178 [2024-11-19 20:15:59.962303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:26.178 [2024-11-19 20:15:59.962312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157632 00:26:26.178 [2024-11-19 20:15:59.962320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155648 00:26:26.178 [2024-11-19 20:15:59.962335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:26:26.178 [2024-11-19 20:15:59.962343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:26.178 [2024-11-19 20:15:59.962353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:26.178 [2024-11-19 20:15:59.962361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:26.178 [2024-11-19 20:15:59.962375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:26.178 [2024-11-19 20:15:59.962382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:26.178 [2024-11-19 20:15:59.962391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.178 [2024-11-19 20:15:59.962421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:26.178 [2024-11-19 20:15:59.962431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:26:26.178 [2024-11-19 20:15:59.962440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:15:59.976333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.439 [2024-11-19 20:15:59.976388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:26.439 [2024-11-19 20:15:59.976400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:26:26.439 [2024-11-19 20:15:59.976408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 
20:15:59.976819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.439 [2024-11-19 20:15:59.976830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:26.439 [2024-11-19 20:15:59.976840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:26:26.439 [2024-11-19 20:15:59.976848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:16:00.013626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.439 [2024-11-19 20:16:00.013679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:26.439 [2024-11-19 20:16:00.013690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.439 [2024-11-19 20:16:00.013699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:16:00.013776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.439 [2024-11-19 20:16:00.013786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:26.439 [2024-11-19 20:16:00.013794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.439 [2024-11-19 20:16:00.013801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:16:00.013886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.439 [2024-11-19 20:16:00.013905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:26.439 [2024-11-19 20:16:00.013913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.439 [2024-11-19 20:16:00.013922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:16:00.013938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.439 [2024-11-19 20:16:00.013946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:26.439 [2024-11-19 20:16:00.013954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.439 [2024-11-19 20:16:00.013961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.439 [2024-11-19 20:16:00.099982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.439 [2024-11-19 20:16:00.100041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:26.439 [2024-11-19 20:16:00.100055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.439 [2024-11-19 20:16:00.100064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.169720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.169779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:26.440 [2024-11-19 20:16:00.169792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.169801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.169871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.169881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:26.440 [2024-11-19 20:16:00.169897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.169905] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.169967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.169978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:26.440 [2024-11-19 20:16:00.169987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.169995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.170094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.170105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:26.440 [2024-11-19 20:16:00.170114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.170126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.170157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.170169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:26.440 [2024-11-19 20:16:00.170177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.170186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.170254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.170266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:26.440 [2024-11-19 20:16:00.170275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.170287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.170338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.440 [2024-11-19 20:16:00.170350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:26.440 [2024-11-19 20:16:00.170361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.440 [2024-11-19 20:16:00.170369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.440 [2024-11-19 20:16:00.170524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.086 ms, result 0 00:26:27.384 00:26:27.384 00:26:27.385 20:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:29.935 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:29.935 20:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:29.935 [2024-11-19 20:16:03.260811] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:26:29.935 [2024-11-19 20:16:03.261293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79795 ] 00:26:29.935 [2024-11-19 20:16:03.430141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.935 [2024-11-19 20:16:03.550964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.197 [2024-11-19 20:16:03.849620] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:30.197 [2024-11-19 20:16:03.849704] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:30.461 [2024-11-19 20:16:04.012164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.012447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:30.461 [2024-11-19 20:16:04.012486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:30.461 [2024-11-19 20:16:04.012496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.012570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.012583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.461 [2024-11-19 20:16:04.012596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:30.461 [2024-11-19 20:16:04.012605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.012629] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:30.461 [2024-11-19 20:16:04.013351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:30.461 [2024-11-19 20:16:04.013371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.013380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.461 [2024-11-19 20:16:04.013390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:26:30.461 [2024-11-19 20:16:04.013399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.015191] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:30.461 [2024-11-19 20:16:04.029842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.029895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:30.461 [2024-11-19 20:16:04.029910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.652 ms 00:26:30.461 [2024-11-19 20:16:04.029918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.030002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.030013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:30.461 [2024-11-19 20:16:04.030022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:30.461 [2024-11-19 20:16:04.030030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.038548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:30.461 [2024-11-19 20:16:04.038592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.461 [2024-11-19 20:16:04.038603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.436 ms 00:26:30.461 [2024-11-19 20:16:04.038612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.038703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.038713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.461 [2024-11-19 20:16:04.038721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:30.461 [2024-11-19 20:16:04.038729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.038777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.038787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:30.461 [2024-11-19 20:16:04.038796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:30.461 [2024-11-19 20:16:04.038804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.038827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:30.461 [2024-11-19 20:16:04.043089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.043131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.461 [2024-11-19 20:16:04.043143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:26:30.461 [2024-11-19 20:16:04.043154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.043190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.043199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:30.461 [2024-11-19 20:16:04.043208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:30.461 [2024-11-19 20:16:04.043218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.043289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:30.461 [2024-11-19 20:16:04.043315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:30.461 [2024-11-19 20:16:04.043353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:30.461 [2024-11-19 20:16:04.043373] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:30.461 [2024-11-19 20:16:04.043500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:30.461 [2024-11-19 20:16:04.043516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:30.461 [2024-11-19 20:16:04.043528] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:30.461 [2024-11-19 20:16:04.043539] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:30.461 [2024-11-19 20:16:04.043548] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:30.461 [2024-11-19 20:16:04.043557] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:30.461 [2024-11-19 20:16:04.043565] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:30.461 [2024-11-19 20:16:04.043573] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:30.461 [2024-11-19 20:16:04.043582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:30.461 [2024-11-19 20:16:04.043594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.043602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:30.461 [2024-11-19 20:16:04.043610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:26:30.461 [2024-11-19 20:16:04.043618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.043712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.461 [2024-11-19 20:16:04.043726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:30.461 [2024-11-19 20:16:04.043734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:30.461 [2024-11-19 20:16:04.043742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.461 [2024-11-19 20:16:04.043848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:30.461 [2024-11-19 20:16:04.043869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:30.461 [2024-11-19 20:16:04.043877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.461 [2024-11-19 20:16:04.043886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.461 [2024-11-19 20:16:04.043895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:30.461 [2024-11-19 20:16:04.043902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:30.461 [2024-11-19 20:16:04.043909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:30.461 [2024-11-19 20:16:04.043916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:30.461 [2024-11-19 20:16:04.043923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:30.461 [2024-11-19 20:16:04.043930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.461 [2024-11-19 20:16:04.043937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:30.461 [2024-11-19 20:16:04.043943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:30.461 [2024-11-19 20:16:04.043950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.461 [2024-11-19 20:16:04.043960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:30.461 [2024-11-19 20:16:04.043968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:30.461 [2024-11-19 20:16:04.043982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.461 [2024-11-19 20:16:04.043989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:30.461 [2024-11-19 20:16:04.043996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:30.461 [2024-11-19 20:16:04.044003] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.461 [2024-11-19 20:16:04.044010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:30.461 [2024-11-19 20:16:04.044017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:30.461 [2024-11-19 20:16:04.044024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.461 [2024-11-19 20:16:04.044030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:30.461 [2024-11-19 20:16:04.044036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:30.461 [2024-11-19 20:16:04.044043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.461 [2024-11-19 20:16:04.044050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:30.461 [2024-11-19 20:16:04.044056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:30.461 [2024-11-19 20:16:04.044063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.461 [2024-11-19 20:16:04.044070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:30.461 [2024-11-19 20:16:04.044076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:30.462 [2024-11-19 20:16:04.044083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.462 [2024-11-19 20:16:04.044090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:30.462 [2024-11-19 20:16:04.044097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:30.462 [2024-11-19 20:16:04.044103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.462 [2024-11-19 20:16:04.044109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:30.462 [2024-11-19 20:16:04.044116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:30.462 [2024-11-19 20:16:04.044124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.462 [2024-11-19 20:16:04.044132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:30.462 [2024-11-19 20:16:04.044139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:30.462 [2024-11-19 20:16:04.044146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.462 [2024-11-19 20:16:04.044153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:30.462 [2024-11-19 20:16:04.044160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:30.462 [2024-11-19 20:16:04.044171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.462 [2024-11-19 20:16:04.044181] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:30.462 [2024-11-19 20:16:04.044193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:30.462 [2024-11-19 20:16:04.044206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.462 [2024-11-19 20:16:04.044249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.462 [2024-11-19 20:16:04.044260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:30.462 [2024-11-19 20:16:04.044268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:30.462 [2024-11-19 20:16:04.044276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:30.462 
[2024-11-19 20:16:04.044283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:30.462 [2024-11-19 20:16:04.044291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:30.462 [2024-11-19 20:16:04.044299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:30.462 [2024-11-19 20:16:04.044308] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:30.462 [2024-11-19 20:16:04.044319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:30.462 [2024-11-19 20:16:04.044335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:30.462 [2024-11-19 20:16:04.044343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:30.462 [2024-11-19 20:16:04.044350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:30.462 [2024-11-19 20:16:04.044357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:30.462 [2024-11-19 20:16:04.044365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:30.462 [2024-11-19 20:16:04.044372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:30.462 [2024-11-19 20:16:04.044380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:30.462 [2024-11-19 20:16:04.044387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:30.462 [2024-11-19 20:16:04.044394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:30.462 [2024-11-19 20:16:04.044432] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:30.462 [2024-11-19 20:16:04.044443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.462 [2024-11-19 20:16:04.044459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:30.462 [2024-11-19 20:16:04.044467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:30.462 [2024-11-19 20:16:04.044475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:30.462 [2024-11-19 20:16:04.044483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.044491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:30.462 [2024-11-19 20:16:04.044500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:26:30.462 [2024-11-19 20:16:04.044509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.077123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.077178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.462 [2024-11-19 20:16:04.077191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.563 ms 00:26:30.462 [2024-11-19 20:16:04.077201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.077318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.077335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:30.462 [2024-11-19 20:16:04.077344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:30.462 [2024-11-19 20:16:04.077353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.129977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.130035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.462 [2024-11-19 20:16:04.130049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.546 ms 00:26:30.462 [2024-11-19 20:16:04.130058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.130110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.130121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.462 [2024-11-19 20:16:04.130131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:30.462 [2024-11-19 20:16:04.130144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.130851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.130909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.462 [2024-11-19 20:16:04.130922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:26:30.462 [2024-11-19 20:16:04.130930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.131097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.131108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.462 [2024-11-19 20:16:04.131117] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:26:30.462 [2024-11-19 20:16:04.131130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.147098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.147144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.462 [2024-11-19 20:16:04.147159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.947 ms 00:26:30.462 [2024-11-19 20:16:04.147168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.161813] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:30.462 [2024-11-19 20:16:04.161867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:30.462 [2024-11-19 20:16:04.161882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.161891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:30.462 [2024-11-19 20:16:04.161901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.570 ms 00:26:30.462 [2024-11-19 20:16:04.161908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.188074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.188135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:30.462 [2024-11-19 20:16:04.188149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.107 ms 00:26:30.462 [2024-11-19 20:16:04.188158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.201417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.201467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:30.462 [2024-11-19 20:16:04.201480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.199 ms 00:26:30.462 [2024-11-19 20:16:04.201487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.214587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.214636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:30.462 [2024-11-19 20:16:04.214649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:26:30.462 [2024-11-19 20:16:04.214658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.462 [2024-11-19 20:16:04.215340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.462 [2024-11-19 20:16:04.215367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:30.462 [2024-11-19 20:16:04.215378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:26:30.462 [2024-11-19 20:16:04.215390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.281582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.281652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:30.724 [2024-11-19 20:16:04.281677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.165 ms 00:26:30.724 [2024-11-19 20:16:04.281686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.293054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:30.724 [2024-11-19 20:16:04.296280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.296327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:30.724 [2024-11-19 20:16:04.296339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:26:30.724 [2024-11-19 20:16:04.296348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.296459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.296472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:30.724 [2024-11-19 20:16:04.296482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:30.724 [2024-11-19 20:16:04.296495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.297389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.297441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:30.724 [2024-11-19 20:16:04.297453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:26:30.724 [2024-11-19 20:16:04.297462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.297494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.297504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:30.724 [2024-11-19 20:16:04.297513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:30.724 [2024-11-19 20:16:04.297522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.297567] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:30.724 [2024-11-19 20:16:04.297581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.297590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:30.724 [2024-11-19 20:16:04.297599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:30.724 [2024-11-19 20:16:04.297608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.323435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.323657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:30.724 [2024-11-19 20:16:04.323682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms 00:26:30.724 [2024-11-19 20:16:04.323699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.724 [2024-11-19 20:16:04.323790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.724 [2024-11-19 20:16:04.323801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:30.724 [2024-11-19 20:16:04.323811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:30.724 [2024-11-19 20:16:04.323819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
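[Editor's note] Each FTL management step above is reported through trace_step as a name/duration pair (source line 428 of mngt/ftl_mngt.c carries the name, line 430 the duration), so the console log alone is enough to profile the startup path; here 'Restore P2L checkpoints' at 66.165 ms and 'Initialize NV cache' at 52.546 ms dominate. A minimal sketch for ranking the slowest steps from a saved copy of this console output; the file name console.log is an assumption, and the sketch assumes one log record per line as Jenkins stores the console:

    # Rank FTL management steps by duration, slowest first.
    # 428:trace_step records carry the step name, 430:trace_step the duration in ms.
    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); printf "%10.3f ms  %s\n", $0, name }' \
        console.log | sort -rn | head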
00:26:30.724 [2024-11-19 20:16:04.325107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.442 ms, result 0 00:26:32.114  [2024-11-19T20:16:06.854Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-19T20:16:07.799Z] Copying: 35/1024 [MB] (21 MBps) [2024-11-19T20:16:08.745Z] Copying: 51/1024 [MB] (16 MBps) [2024-11-19T20:16:09.763Z] Copying: 71/1024 [MB] (19 MBps) [2024-11-19T20:16:10.709Z] Copying: 89/1024 [MB] (17 MBps) [2024-11-19T20:16:11.653Z] Copying: 99/1024 [MB] (10 MBps) [2024-11-19T20:16:12.600Z] Copying: 110/1024 [MB] (10 MBps) [2024-11-19T20:16:13.542Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-19T20:16:14.928Z] Copying: 145/1024 [MB] (24 MBps) [2024-11-19T20:16:15.883Z] Copying: 157/1024 [MB] (12 MBps) [2024-11-19T20:16:16.822Z] Copying: 175/1024 [MB] (17 MBps) [2024-11-19T20:16:17.762Z] Copying: 192/1024 [MB] (17 MBps) [2024-11-19T20:16:18.708Z] Copying: 222/1024 [MB] (30 MBps) [2024-11-19T20:16:19.647Z] Copying: 239/1024 [MB] (16 MBps) [2024-11-19T20:16:20.592Z] Copying: 257/1024 [MB] (18 MBps) [2024-11-19T20:16:21.538Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-19T20:16:22.925Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-19T20:16:23.865Z] Copying: 297/1024 [MB] (11 MBps) [2024-11-19T20:16:24.808Z] Copying: 310/1024 [MB] (12 MBps) [2024-11-19T20:16:25.747Z] Copying: 328/1024 [MB] (18 MBps) [2024-11-19T20:16:26.693Z] Copying: 355/1024 [MB] (26 MBps) [2024-11-19T20:16:27.636Z] Copying: 374/1024 [MB] (19 MBps) [2024-11-19T20:16:28.579Z] Copying: 398/1024 [MB] (24 MBps) [2024-11-19T20:16:29.524Z] Copying: 421/1024 [MB] (23 MBps) [2024-11-19T20:16:30.913Z] Copying: 441/1024 [MB] (20 MBps) [2024-11-19T20:16:31.858Z] Copying: 462/1024 [MB] (20 MBps) [2024-11-19T20:16:32.802Z] Copying: 474/1024 [MB] (12 MBps) [2024-11-19T20:16:33.747Z] Copying: 492/1024 [MB] (17 MBps) [2024-11-19T20:16:34.693Z] Copying: 508/1024 [MB] (15 MBps) [2024-11-19T20:16:35.639Z] Copying: 519/1024 [MB] (11 MBps) [2024-11-19T20:16:36.585Z] Copying: 530/1024 [MB] (11 MBps) [2024-11-19T20:16:37.536Z] Copying: 542/1024 [MB] (11 MBps) [2024-11-19T20:16:38.926Z] Copying: 556/1024 [MB] (14 MBps) [2024-11-19T20:16:39.871Z] Copying: 573/1024 [MB] (17 MBps) [2024-11-19T20:16:40.817Z] Copying: 593/1024 [MB] (19 MBps) [2024-11-19T20:16:41.891Z] Copying: 606/1024 [MB] (12 MBps) [2024-11-19T20:16:42.836Z] Copying: 617/1024 [MB] (11 MBps) [2024-11-19T20:16:43.782Z] Copying: 628/1024 [MB] (11 MBps) [2024-11-19T20:16:44.729Z] Copying: 640/1024 [MB] (11 MBps) [2024-11-19T20:16:45.674Z] Copying: 651/1024 [MB] (11 MBps) [2024-11-19T20:16:46.616Z] Copying: 662/1024 [MB] (11 MBps) [2024-11-19T20:16:47.560Z] Copying: 673/1024 [MB] (10 MBps) [2024-11-19T20:16:48.553Z] Copying: 699/1024 [MB] (25 MBps) [2024-11-19T20:16:49.941Z] Copying: 713/1024 [MB] (14 MBps) [2024-11-19T20:16:50.883Z] Copying: 734/1024 [MB] (20 MBps) [2024-11-19T20:16:51.826Z] Copying: 753/1024 [MB] (19 MBps) [2024-11-19T20:16:52.770Z] Copying: 769/1024 [MB] (16 MBps) [2024-11-19T20:16:53.714Z] Copying: 789/1024 [MB] (19 MBps) [2024-11-19T20:16:54.659Z] Copying: 806/1024 [MB] (16 MBps) [2024-11-19T20:16:55.601Z] Copying: 816/1024 [MB] (10 MBps) [2024-11-19T20:16:56.542Z] Copying: 827/1024 [MB] (10 MBps) [2024-11-19T20:16:57.930Z] Copying: 843/1024 [MB] (16 MBps) [2024-11-19T20:16:58.871Z] Copying: 865/1024 [MB] (21 MBps) [2024-11-19T20:16:59.825Z] Copying: 888/1024 [MB] (23 MBps) [2024-11-19T20:17:00.770Z] Copying: 907/1024 [MB] (18 MBps) [2024-11-19T20:17:01.714Z] Copying: 920/1024 [MB] (13 MBps) 
[2024-11-19T20:17:02.657Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-19T20:17:03.601Z] Copying: 952/1024 [MB] (18 MBps) [2024-11-19T20:17:04.544Z] Copying: 967/1024 [MB] (14 MBps) [2024-11-19T20:17:05.931Z] Copying: 983/1024 [MB] (15 MBps) [2024-11-19T20:17:06.874Z] Copying: 1002/1024 [MB] (19 MBps) [2024-11-19T20:17:07.136Z] Copying: 1020/1024 [MB] (17 MBps) [2024-11-19T20:17:07.398Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 20:17:07.292794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.292890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.604 [2024-11-19 20:17:07.292907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.604 [2024-11-19 20:17:07.292917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.292942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.604 [2024-11-19 20:17:07.296131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.296178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.604 [2024-11-19 20:17:07.296198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:27:33.604 [2024-11-19 20:17:07.296207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.296461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.296538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.604 [2024-11-19 20:17:07.296552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:27:33.604 [2024-11-19 20:17:07.296561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.300057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.300249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.604 [2024-11-19 20:17:07.300266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:27:33.604 [2024-11-19 20:17:07.300276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.307773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.307819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.604 [2024-11-19 20:17:07.307831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.461 ms 00:27:33.604 [2024-11-19 20:17:07.307839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.336653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.336857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.604 [2024-11-19 20:17:07.337342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.737 ms 00:27:33.604 [2024-11-19 20:17:07.337372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.354523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.354602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.604 [2024-11-19 20:17:07.354619] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.980 ms 00:27:33.604 [2024-11-19 20:17:07.354628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.359664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.359857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.604 [2024-11-19 20:17:07.359879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.976 ms 00:27:33.604 [2024-11-19 20:17:07.359889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.604 [2024-11-19 20:17:07.386740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.604 [2024-11-19 20:17:07.386938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.604 [2024-11-19 20:17:07.386959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.829 ms 00:27:33.604 [2024-11-19 20:17:07.386967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.867 [2024-11-19 20:17:07.413987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.867 [2024-11-19 20:17:07.414211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.867 [2024-11-19 20:17:07.414260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.653 ms 00:27:33.867 [2024-11-19 20:17:07.414268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.867 [2024-11-19 20:17:07.440162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.867 [2024-11-19 20:17:07.440211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.867 [2024-11-19 20:17:07.440247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.819 ms 00:27:33.867 [2024-11-19 20:17:07.440255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.867 [2024-11-19 20:17:07.466095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.867 [2024-11-19 20:17:07.466299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.867 [2024-11-19 20:17:07.466321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.746 ms 00:27:33.867 [2024-11-19 20:17:07.466329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.867 [2024-11-19 20:17:07.466369] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.867 [2024-11-19 20:17:07.466387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:33.867 [2024-11-19 20:17:07.466406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:33.867 [2024-11-19 20:17:07.466416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:27:33.867 [2024-11-19 20:17:07.466457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.867 [2024-11-19 20:17:07.466665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.466997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467076] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.868 [2024-11-19 20:17:07.467253] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.868 [2024-11-19 20:17:07.467265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29e552cc-be1f-4a80-a058-31f7183132d9 00:27:33.868 [2024-11-19 20:17:07.467274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:33.868 [2024-11-19 20:17:07.467282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:33.868 [2024-11-19 20:17:07.467290] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:33.868 [2024-11-19 20:17:07.467298] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:33.868 [2024-11-19 20:17:07.467306] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.868 [2024-11-19 20:17:07.467316] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.868 [2024-11-19 20:17:07.467331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:33.869 [2024-11-19 20:17:07.467338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:33.869 [2024-11-19 20:17:07.467344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.869 [2024-11-19 20:17:07.467351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.869 [2024-11-19 20:17:07.467360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.869 [2024-11-19 20:17:07.467370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:27:33.869 [2024-11-19 20:17:07.467378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.481236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.869 [2024-11-19 20:17:07.481411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.869 [2024-11-19 20:17:07.481429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.819 ms 00:27:33.869 [2024-11-19 20:17:07.481437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.481831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.869 [2024-11-19 20:17:07.481841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.869 [2024-11-19 20:17:07.481859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:27:33.869 [2024-11-19 20:17:07.481867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.519070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.869 [2024-11-19 20:17:07.519123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.869 [2024-11-19 20:17:07.519136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.869 [2024-11-19 20:17:07.519146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.519214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.869 [2024-11-19 20:17:07.519248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.869 [2024-11-19 20:17:07.519263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.869 [2024-11-19 20:17:07.519272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.519370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.869 [2024-11-19 20:17:07.519381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.869 [2024-11-19 20:17:07.519390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.869 [2024-11-19 20:17:07.519399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.869 [2024-11-19 20:17:07.519415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.869 [2024-11-19 20:17:07.519424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.869 [2024-11-19 20:17:07.519431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.869 [2024-11-19 20:17:07.519442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:33.869 [2024-11-19 20:17:07.603174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.869 [2024-11-19 20:17:07.603262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.869 [2024-11-19 20:17:07.603277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.869 [2024-11-19 20:17:07.603286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.671919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.671976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.131 [2024-11-19 20:17:07.671989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.131 [2024-11-19 20:17:07.672088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.131 [2024-11-19 20:17:07.672204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.131 [2024-11-19 20:17:07.672366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.131 [2024-11-19 20:17:07.672449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.131 [2024-11-19 20:17:07.672521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.131 [2024-11-19 20:17:07.672591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.131 [2024-11-19 20:17:07.672600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.131 [2024-11-19 20:17:07.672609] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.131 [2024-11-19 20:17:07.672747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.918 ms, result 0 00:27:34.703 00:27:34.704 00:27:34.704 20:17:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:37.254 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:37.254 Process with pid 77517 is not found 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77517 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77517 ']' 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77517 00:27:37.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77517) - No such process 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77517 is not found' 00:27:37.254 20:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:37.517 Remove shared memory files 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:37.517 ************************************ 00:27:37.517 END TEST ftl_dirty_shutdown 00:27:37.517 ************************************ 00:27:37.517 00:27:37.517 real 4m44.213s 00:27:37.517 user 5m12.730s 00:27:37.517 sys 0m27.292s 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.517 20:17:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.517 20:17:11 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:37.517 20:17:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.517 20:17:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.517 20:17:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:37.517 ************************************ 
00:27:37.517 START TEST ftl_upgrade_shutdown 00:27:37.517 ************************************ 00:27:37.517 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:37.780 * Looking for test storage... 00:27:37.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:37.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.780 --rc genhtml_branch_coverage=1 00:27:37.780 --rc genhtml_function_coverage=1 00:27:37.780 --rc genhtml_legend=1 00:27:37.780 --rc geninfo_all_blocks=1 00:27:37.780 --rc geninfo_unexecuted_blocks=1 00:27:37.780 00:27:37.780 ' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:37.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.780 --rc genhtml_branch_coverage=1 00:27:37.780 --rc genhtml_function_coverage=1 00:27:37.780 --rc genhtml_legend=1 00:27:37.780 --rc geninfo_all_blocks=1 00:27:37.780 --rc geninfo_unexecuted_blocks=1 00:27:37.780 00:27:37.780 ' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:37.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.780 --rc genhtml_branch_coverage=1 00:27:37.780 --rc genhtml_function_coverage=1 00:27:37.780 --rc genhtml_legend=1 00:27:37.780 --rc geninfo_all_blocks=1 00:27:37.780 --rc geninfo_unexecuted_blocks=1 00:27:37.780 00:27:37.780 ' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:37.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.780 --rc genhtml_branch_coverage=1 00:27:37.780 --rc genhtml_function_coverage=1 00:27:37.780 --rc genhtml_legend=1 00:27:37.780 --rc geninfo_all_blocks=1 00:27:37.780 --rc geninfo_unexecuted_blocks=1 00:27:37.780 00:27:37.780 ' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:37.780 20:17:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80559 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80559 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80559 ']' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:37.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.780 20:17:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.780 [2024-11-19 20:17:11.561089] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
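[Editor's note] The xtrace above shows tcp_target_setup launching spdk_tgt pinned to core 0 ('--cpumask=[0]') and then parking in waitforlisten until the new target (pid 80559) answers on /var/tmp/spdk.sock. A simplified stand-in for that wait loop, not the real common.sh helper; the binary and socket paths are taken from the log, and rpc_get_methods is used only as a cheap RPC to probe with:

    # Launch the target on core 0 and block until its RPC socket responds.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt ($spdk_tgt_pid) is listening"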
00:27:37.780 [2024-11-19 20:17:11.561477] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:27:38.042 [2024-11-19 20:17:11.730285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.305 [2024-11-19 20:17:11.858356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:38.902 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:39.186 20:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:39.457 { 00:27:39.457 "name": "basen1", 00:27:39.457 "aliases": [ 00:27:39.457 "1935f503-8900-4d84-8b38-831a64a6750f" 00:27:39.457 ], 00:27:39.457 "product_name": "NVMe disk", 00:27:39.457 "block_size": 4096, 00:27:39.457 "num_blocks": 1310720, 00:27:39.457 "uuid": "1935f503-8900-4d84-8b38-831a64a6750f", 00:27:39.457 "numa_id": -1, 00:27:39.457 "assigned_rate_limits": { 00:27:39.457 "rw_ios_per_sec": 0, 00:27:39.457 "rw_mbytes_per_sec": 0, 00:27:39.457 "r_mbytes_per_sec": 0, 00:27:39.457 "w_mbytes_per_sec": 0 00:27:39.457 }, 00:27:39.457 "claimed": true, 00:27:39.457 "claim_type": "read_many_write_one", 00:27:39.457 "zoned": false, 00:27:39.457 "supported_io_types": { 00:27:39.457 "read": true, 00:27:39.457 "write": true, 00:27:39.457 "unmap": true, 00:27:39.457 "flush": true, 00:27:39.457 "reset": true, 00:27:39.457 "nvme_admin": true, 00:27:39.457 "nvme_io": true, 00:27:39.457 "nvme_io_md": false, 00:27:39.457 "write_zeroes": true, 00:27:39.457 "zcopy": false, 00:27:39.457 "get_zone_info": false, 00:27:39.457 "zone_management": false, 00:27:39.457 "zone_append": false, 00:27:39.457 "compare": true, 00:27:39.457 "compare_and_write": false, 00:27:39.457 "abort": true, 00:27:39.457 "seek_hole": false, 00:27:39.457 "seek_data": false, 00:27:39.457 "copy": true, 00:27:39.457 "nvme_iov_md": false 00:27:39.457 }, 00:27:39.457 "driver_specific": { 00:27:39.457 "nvme": [ 00:27:39.457 { 00:27:39.457 "pci_address": "0000:00:11.0", 00:27:39.457 "trid": { 00:27:39.457 "trtype": "PCIe", 00:27:39.457 "traddr": "0000:00:11.0" 00:27:39.457 }, 00:27:39.457 "ctrlr_data": { 00:27:39.457 "cntlid": 0, 00:27:39.457 "vendor_id": "0x1b36", 00:27:39.457 "model_number": "QEMU NVMe Ctrl", 00:27:39.457 "serial_number": "12341", 00:27:39.457 "firmware_revision": "8.0.0", 00:27:39.457 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:39.457 "oacs": { 00:27:39.457 "security": 0, 00:27:39.457 "format": 1, 00:27:39.457 "firmware": 0, 00:27:39.457 "ns_manage": 1 00:27:39.457 }, 00:27:39.457 "multi_ctrlr": false, 00:27:39.457 "ana_reporting": false 00:27:39.457 }, 00:27:39.457 "vs": { 00:27:39.457 "nvme_version": "1.4" 00:27:39.457 }, 00:27:39.457 "ns_data": { 00:27:39.457 "id": 1, 00:27:39.457 "can_share": false 00:27:39.457 } 00:27:39.457 } 00:27:39.457 ], 00:27:39.457 "mp_policy": "active_passive" 00:27:39.457 } 00:27:39.457 } 00:27:39.457 ]' 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:39.457 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:39.718 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9458583c-d6aa-40e2-b1b5-2fadeed8823b 00:27:39.718 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:39.718 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9458583c-d6aa-40e2-b1b5-2fadeed8823b 00:27:39.979 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:40.238 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2431ca7d-9570-4b86-94ff-c25849897066 00:27:40.238 20:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2431ca7d-9570-4b86-94ff-c25849897066 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a622e90d-96c6-4e86-bc51-90c0adb11cfa 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a622e90d-96c6-4e86-bc51-90c0adb11cfa ]] 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a622e90d-96c6-4e86-bc51-90c0adb11cfa 5120 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a622e90d-96c6-4e86-bc51-90c0adb11cfa 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a622e90d-96c6-4e86-bc51-90c0adb11cfa 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a622e90d-96c6-4e86-bc51-90c0adb11cfa 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a622e90d-96c6-4e86-bc51-90c0adb11cfa 00:27:40.496 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:40.496 { 00:27:40.496 "name": "a622e90d-96c6-4e86-bc51-90c0adb11cfa", 00:27:40.496 "aliases": [ 00:27:40.496 "lvs/basen1p0" 00:27:40.496 ], 00:27:40.496 "product_name": "Logical Volume", 00:27:40.496 "block_size": 4096, 00:27:40.496 "num_blocks": 5242880, 00:27:40.496 "uuid": "a622e90d-96c6-4e86-bc51-90c0adb11cfa", 00:27:40.496 "assigned_rate_limits": { 00:27:40.497 "rw_ios_per_sec": 0, 00:27:40.497 "rw_mbytes_per_sec": 0, 00:27:40.497 "r_mbytes_per_sec": 0, 00:27:40.497 "w_mbytes_per_sec": 0 00:27:40.497 }, 00:27:40.497 "claimed": false, 00:27:40.497 "zoned": false, 00:27:40.497 "supported_io_types": { 00:27:40.497 "read": true, 00:27:40.497 "write": true, 00:27:40.497 "unmap": true, 00:27:40.497 "flush": false, 00:27:40.497 "reset": true, 00:27:40.497 "nvme_admin": false, 00:27:40.497 "nvme_io": false, 00:27:40.497 "nvme_io_md": false, 00:27:40.497 "write_zeroes": 
true, 00:27:40.497 "zcopy": false, 00:27:40.497 "get_zone_info": false, 00:27:40.497 "zone_management": false, 00:27:40.497 "zone_append": false, 00:27:40.497 "compare": false, 00:27:40.497 "compare_and_write": false, 00:27:40.497 "abort": false, 00:27:40.497 "seek_hole": true, 00:27:40.497 "seek_data": true, 00:27:40.497 "copy": false, 00:27:40.497 "nvme_iov_md": false 00:27:40.497 }, 00:27:40.497 "driver_specific": { 00:27:40.497 "lvol": { 00:27:40.497 "lvol_store_uuid": "2431ca7d-9570-4b86-94ff-c25849897066", 00:27:40.497 "base_bdev": "basen1", 00:27:40.497 "thin_provision": true, 00:27:40.497 "num_allocated_clusters": 0, 00:27:40.497 "snapshot": false, 00:27:40.497 "clone": false, 00:27:40.497 "esnap_clone": false 00:27:40.497 } 00:27:40.497 } 00:27:40.497 } 00:27:40.497 ]' 00:27:40.497 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:40.755 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:41.014 20:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a622e90d-96c6-4e86-bc51-90c0adb11cfa -c cachen1p0 --l2p_dram_limit 2 00:27:41.273 [2024-11-19 20:17:14.982446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.982482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:41.273 [2024-11-19 20:17:14.982494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:41.273 [2024-11-19 20:17:14.982501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.982541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.982549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.273 [2024-11-19 20:17:14.982557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:41.273 [2024-11-19 20:17:14.982563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.982579] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:41.273 [2024-11-19 
20:17:14.983143] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:41.273 [2024-11-19 20:17:14.983160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.983166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.273 [2024-11-19 20:17:14.983174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:27:41.273 [2024-11-19 20:17:14.983180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.983206] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d8cf7576-d0e5-4127-bb78-58c1ce2e397d 00:27:41.273 [2024-11-19 20:17:14.984167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.984186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:41.273 [2024-11-19 20:17:14.984193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:41.273 [2024-11-19 20:17:14.984200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.989021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.989049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:41.273 [2024-11-19 20:17:14.989058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.755 ms 00:27:41.273 [2024-11-19 20:17:14.989065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.989095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.989103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:41.273 [2024-11-19 20:17:14.989110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:41.273 [2024-11-19 20:17:14.989118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.989154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.989163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:41.273 [2024-11-19 20:17:14.989169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:41.273 [2024-11-19 20:17:14.989180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.989195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:41.273 [2024-11-19 20:17:14.992081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.992104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:41.273 [2024-11-19 20:17:14.992114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.888 ms 00:27:41.273 [2024-11-19 20:17:14.992119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.992140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.273 [2024-11-19 20:17:14.992147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:41.273 [2024-11-19 20:17:14.992154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:41.273 [2024-11-19 20:17:14.992160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:41.273 [2024-11-19 20:17:14.992180] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:41.273 [2024-11-19 20:17:14.992292] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:41.273 [2024-11-19 20:17:14.992304] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:41.273 [2024-11-19 20:17:14.992312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:41.273 [2024-11-19 20:17:14.992322] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992329] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992337] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:41.274 [2024-11-19 20:17:14.992343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:41.274 [2024-11-19 20:17:14.992352] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:41.274 [2024-11-19 20:17:14.992357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:41.274 [2024-11-19 20:17:14.992364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.274 [2024-11-19 20:17:14.992370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:41.274 [2024-11-19 20:17:14.992377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:27:41.274 [2024-11-19 20:17:14.992382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.274 [2024-11-19 20:17:14.992446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.274 [2024-11-19 20:17:14.992452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:41.274 [2024-11-19 20:17:14.992460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:41.274 [2024-11-19 20:17:14.992470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.274 [2024-11-19 20:17:14.992550] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:41.274 [2024-11-19 20:17:14.992557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:41.274 [2024-11-19 20:17:14.992564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:41.274 [2024-11-19 20:17:14.992582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:41.274 [2024-11-19 20:17:14.992594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:41.274 [2024-11-19 20:17:14.992600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:41.274 [2024-11-19 20:17:14.992605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:41.274 [2024-11-19 20:17:14.992618] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:41.274 [2024-11-19 20:17:14.992625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:41.274 [2024-11-19 20:17:14.992637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:41.274 [2024-11-19 20:17:14.992643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:41.274 [2024-11-19 20:17:14.992656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:41.274 [2024-11-19 20:17:14.992663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:41.274 [2024-11-19 20:17:14.992675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:41.274 [2024-11-19 20:17:14.992692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:41.274 [2024-11-19 20:17:14.992709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:41.274 [2024-11-19 20:17:14.992726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:41.274 [2024-11-19 20:17:14.992744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:41.274 [2024-11-19 20:17:14.992761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:41.274 [2024-11-19 20:17:14.992779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:41.274 [2024-11-19 20:17:14.992794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:41.274 [2024-11-19 20:17:14.992801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992805] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:41.274 [2024-11-19 20:17:14.992812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:41.274 [2024-11-19 20:17:14.992817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.274 [2024-11-19 20:17:14.992834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:41.274 [2024-11-19 20:17:14.992842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:41.274 [2024-11-19 20:17:14.992847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:41.274 [2024-11-19 20:17:14.992854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:41.274 [2024-11-19 20:17:14.992858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:41.274 [2024-11-19 20:17:14.992865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:41.274 [2024-11-19 20:17:14.992873] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:41.274 [2024-11-19 20:17:14.992881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:41.274 [2024-11-19 20:17:14.992895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:41.274 [2024-11-19 20:17:14.992914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:41.274 [2024-11-19 20:17:14.992923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:41.274 [2024-11-19 20:17:14.992929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:41.274 [2024-11-19 20:17:14.992936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:41.274 [2024-11-19 20:17:14.992979] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:41.274 [2024-11-19 20:17:14.992986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.992992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:41.274 [2024-11-19 20:17:14.993000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:41.274 [2024-11-19 20:17:14.993005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:41.274 [2024-11-19 20:17:14.993012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:41.274 [2024-11-19 20:17:14.993018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.274 [2024-11-19 20:17:14.993025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:41.274 [2024-11-19 20:17:14.993030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:27:41.274 [2024-11-19 20:17:14.993037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.274 [2024-11-19 20:17:14.993066] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
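Note on the layout dump above: the geometry is fully determined by the inputs shown earlier — a 20480.00 MiB base device, a 5120.00 MiB NV cache, and 3774873 L2P entries at 4 bytes each. A quick shell check of why the l2p region weighs in at 14.50 MiB (plain arithmetic, not part of the test itself):

  echo $(( 3774873 * 4 ))          # 15099492 bytes of L2P table
  echo $(( 15099492 / 1048576 ))   # 14 (integer MiB); padded out to the 14.50 MiB l2p region above

The --l2p_dram_limit 2 passed to bdev_ftl_create earlier caps how much of that table stays resident at once, which is why the startup below reports 'l2p maximum resident size is: 1 (of 2) MiB'.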
00:27:41.274 [2024-11-19 20:17:14.993075] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:44.570 [2024-11-19 20:17:18.335061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.570 [2024-11-19 20:17:18.335139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:44.570 [2024-11-19 20:17:18.335158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3341.979 ms 00:27:44.570 [2024-11-19 20:17:18.335172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.366782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.366847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:44.831 [2024-11-19 20:17:18.366862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.354 ms 00:27:44.831 [2024-11-19 20:17:18.366874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.366964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.366978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:44.831 [2024-11-19 20:17:18.366987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:44.831 [2024-11-19 20:17:18.367001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.402364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.402416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:44.831 [2024-11-19 20:17:18.402428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.325 ms 00:27:44.831 [2024-11-19 20:17:18.402439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.402475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.402490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:44.831 [2024-11-19 20:17:18.402499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:44.831 [2024-11-19 20:17:18.402509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.403107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.403146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:44.831 [2024-11-19 20:17:18.403157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:27:44.831 [2024-11-19 20:17:18.403168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.403241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.403254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:44.831 [2024-11-19 20:17:18.403265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:44.831 [2024-11-19 20:17:18.403278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.420597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.420643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:44.831 [2024-11-19 20:17:18.420654] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.300 ms 00:27:44.831 [2024-11-19 20:17:18.420664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.433841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:44.831 [2024-11-19 20:17:18.435120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.435156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:44.831 [2024-11-19 20:17:18.435170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.368 ms 00:27:44.831 [2024-11-19 20:17:18.435177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.469916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.469968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:44.831 [2024-11-19 20:17:18.469986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.704 ms 00:27:44.831 [2024-11-19 20:17:18.469995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.470103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.470119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:44.831 [2024-11-19 20:17:18.470135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:44.831 [2024-11-19 20:17:18.470143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.495020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.495065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:44.831 [2024-11-19 20:17:18.495080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.821 ms 00:27:44.831 [2024-11-19 20:17:18.495089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.520491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.520533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:44.831 [2024-11-19 20:17:18.520548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.348 ms 00:27:44.831 [2024-11-19 20:17:18.520555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.521153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.521170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:44.831 [2024-11-19 20:17:18.521182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:27:44.831 [2024-11-19 20:17:18.521190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.831 [2024-11-19 20:17:18.603657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.831 [2024-11-19 20:17:18.603701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:44.831 [2024-11-19 20:17:18.603722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.401 ms 00:27:44.831 [2024-11-19 20:17:18.603730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.631456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:45.093 [2024-11-19 20:17:18.631503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:45.093 [2024-11-19 20:17:18.631527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.627 ms 00:27:45.093 [2024-11-19 20:17:18.631535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.657478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.093 [2024-11-19 20:17:18.657532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:45.093 [2024-11-19 20:17:18.657547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.888 ms 00:27:45.093 [2024-11-19 20:17:18.657554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.683252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.093 [2024-11-19 20:17:18.683296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:45.093 [2024-11-19 20:17:18.683310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.644 ms 00:27:45.093 [2024-11-19 20:17:18.683318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.683374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.093 [2024-11-19 20:17:18.683383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:45.093 [2024-11-19 20:17:18.683398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:45.093 [2024-11-19 20:17:18.683406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.683498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.093 [2024-11-19 20:17:18.683509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:45.093 [2024-11-19 20:17:18.683522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:45.093 [2024-11-19 20:17:18.683531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.093 [2024-11-19 20:17:18.684784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3701.838 ms, result 0 00:27:45.093 { 00:27:45.093 "name": "ftl", 00:27:45.093 "uuid": "d8cf7576-d0e5-4127-bb78-58c1ce2e397d" 00:27:45.093 } 00:27:45.093 20:17:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:45.355 [2024-11-19 20:17:18.915830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.355 20:17:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:45.615 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:45.615 [2024-11-19 20:17:19.344318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:45.615 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:45.876 [2024-11-19 20:17:19.565079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:45.876 20:17:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:46.448 Fill FTL, iteration 1 00:27:46.448 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:46.448 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80681 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80681 /var/tmp/spdk.tgt.sock 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80681 ']' 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:46.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.449 20:17:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.449 [2024-11-19 20:17:20.030973] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
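The fill parameters traced above multiply out to exactly the declared size: bs=1048576 (1 MiB) times count=1024 is 1073741824 bytes, so each of the two iterations writes one 1 GiB stripe of /dev/urandom through ftln1 at queue depth 2. A one-line sanity check:

  echo $(( 1048576 * 1024 ))   # 1073741824, matching size= above; 1 GiB per fill pass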
00:27:46.449 [2024-11-19 20:17:20.031136] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80681 ] 00:27:46.449 [2024-11-19 20:17:20.192122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.711 [2024-11-19 20:17:20.343263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.279 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.279 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:47.279 20:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:47.539 ftln1 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80681 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80681 ']' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80681 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80681 00:27:47.798 killing process with pid 80681 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80681' 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80681 00:27:47.798 20:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80681 00:27:49.173 20:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:49.173 20:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:49.173 [2024-11-19 20:17:22.951636] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
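Condensed from the trace above, the tcp_dd helper works in three moves: bring up a throwaway spdk_tgt on its own RPC socket, attach the FTL bdev exported over NVMe/TCP so it shows up locally as ftln1 and snapshot the bdev config to ini.json, then kill that target and let spdk_dd run offline against the snapshot. A sketch of the same flow, with paths and NQN taken from this run (binaries shown without their /home/vagrant/spdk_repo prefixes, and the ini.json redirection paraphrased — the trace only shows the echo/save_subsystem_config trio):

  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes ftln1
  { echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ini.json
  spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0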
00:27:49.173 [2024-11-19 20:17:22.951743] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80730 ] 00:27:49.432 [2024-11-19 20:17:23.108336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.432 [2024-11-19 20:17:23.194861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.814  [2024-11-19T20:17:25.552Z] Copying: 243/1024 [MB] (243 MBps) [2024-11-19T20:17:26.940Z] Copying: 482/1024 [MB] (239 MBps) [2024-11-19T20:17:27.883Z] Copying: 717/1024 [MB] (235 MBps) [2024-11-19T20:17:27.883Z] Copying: 954/1024 [MB] (237 MBps) [2024-11-19T20:17:28.449Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:27:54.655 00:27:54.656 Calculate MD5 checksum, iteration 1 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:54.656 20:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.914 [2024-11-19 20:17:28.505406] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
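Worth noting in the trace above: this second tcp_dd call finds ini.json already in place ([[ -f ... ]] followed by return 0), so the initiator bring-up is skipped entirely and it goes straight to spdk_dd. In outline ($ini_json is a placeholder name; the script traces the literal path):

  # tcp_initiator_setup short-circuits once the config snapshot exists
  [[ -f "$ini_json" ]] && return 0   # no new spdk_tgt; reuse the saved bdev config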
00:27:54.915 [2024-11-19 20:17:28.505517] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80787 ] 00:27:54.915 [2024-11-19 20:17:28.657639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.173 [2024-11-19 20:17:28.747870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.550  [2024-11-19T20:17:30.606Z] Copying: 780/1024 [MB] (780 MBps) [2024-11-19T20:17:31.179Z] Copying: 1024/1024 [MB] (average 746 MBps) 00:27:57.385 00:27:57.385 20:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:57.385 20:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:59.301 Fill FTL, iteration 2 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b7127623dfb0d75950bbccc7108b3e4d 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.301 20:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:59.563 [2024-11-19 20:17:33.140611] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
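The checksum bookkeeping above is the heart of the test: after each 1 GiB fill, the stripe is read back through ftln1 into a scratch file and its md5 is stashed, here sums[0]=b7127623dfb0d75950bbccc7108b3e4d. Paraphrased ($FTL_FILE stands for the test/ftl/file path traced above):

  sums[i]=$(md5sum "$FTL_FILE" | cut -f1 -d' ')   # fingerprint of stripe i
  (( i++ ))                                       # next iteration fills at seek=1024

The recorded sums are presumably compared against a fresh read-back after the shutdown/upgrade cycle the test is named for; that comparison falls outside this excerpt.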
00:27:59.563 [2024-11-19 20:17:33.140745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80843 ] 00:27:59.563 [2024-11-19 20:17:33.311187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.825 [2024-11-19 20:17:33.452437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.213  [2024-11-19T20:17:35.953Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-19T20:17:36.898Z] Copying: 439/1024 [MB] (243 MBps) [2024-11-19T20:17:37.846Z] Copying: 672/1024 [MB] (233 MBps) [2024-11-19T20:17:38.420Z] Copying: 897/1024 [MB] (225 MBps) [2024-11-19T20:17:38.992Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:28:05.198 00:28:05.459 Calculate MD5 checksum, iteration 2 00:28:05.459 20:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.459 20:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.459 [2024-11-19 20:17:39.068762] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
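Offsets in these passes are tracked in 1 MiB blocks and advance one stripe per pass, which is why the traces show seek stepping 0 -> 1024 -> 2048 across the fills while skip steps 0 -> 1024 -> 2048 across the read-backs. Schematically:

  seek=$(( seek + 1024 ))   # after each fill pass
  skip=$(( skip + 1024 ))   # after each read-back/md5 pass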
00:28:05.459 [2024-11-19 20:17:39.068879] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80908 ] 00:28:05.459 [2024-11-19 20:17:39.224930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.720 [2024-11-19 20:17:39.324530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.106  [2024-11-19T20:17:41.473Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-19T20:17:44.079Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:28:10.285 00:28:10.285 20:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:10.285 20:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.820 20:17:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:12.820 20:17:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2d20ec28ce5c31759115f96ec01c2c0a 00:28:12.820 20:17:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:12.820 20:17:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:12.820 20:17:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:12.820 [2024-11-19 20:17:46.179571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.820 [2024-11-19 20:17:46.179611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:12.820 [2024-11-19 20:17:46.179621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:12.820 [2024-11-19 20:17:46.179628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.820 [2024-11-19 20:17:46.179645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.820 [2024-11-19 20:17:46.179652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:12.820 [2024-11-19 20:17:46.179658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:12.820 [2024-11-19 20:17:46.179667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.820 [2024-11-19 20:17:46.179682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.820 [2024-11-19 20:17:46.179692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:12.820 [2024-11-19 20:17:46.179698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:12.820 [2024-11-19 20:17:46.179704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.820 [2024-11-19 20:17:46.179751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.172 ms, result 0 00:28:12.820 true 00:28:12.820 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:12.820 { 00:28:12.820 "name": "ftl", 00:28:12.820 "properties": [ 00:28:12.820 { 00:28:12.820 "name": "superblock_version", 00:28:12.820 "value": 5, 00:28:12.820 "read-only": true 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "name": "base_device", 00:28:12.820 "bands": [ 00:28:12.820 { 00:28:12.820 "id": 0, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 
00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 1, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 2, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 3, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 4, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 5, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 6, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 7, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 8, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 9, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 10, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 11, 00:28:12.820 "state": "FREE", 00:28:12.820 "validity": 0.0 00:28:12.820 }, 00:28:12.820 { 00:28:12.820 "id": 12, 00:28:12.820 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 13, 00:28:12.821 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 14, 00:28:12.821 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 15, 00:28:12.821 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 16, 00:28:12.821 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 17, 00:28:12.821 "state": "FREE", 00:28:12.821 "validity": 0.0 00:28:12.821 } 00:28:12.821 ], 00:28:12.821 "read-only": true 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "name": "cache_device", 00:28:12.821 "type": "bdev", 00:28:12.821 "chunks": [ 00:28:12.821 { 00:28:12.821 "id": 0, 00:28:12.821 "state": "INACTIVE", 00:28:12.821 "utilization": 0.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 1, 00:28:12.821 "state": "CLOSED", 00:28:12.821 "utilization": 1.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 2, 00:28:12.821 "state": "CLOSED", 00:28:12.821 "utilization": 1.0 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 3, 00:28:12.821 "state": "OPEN", 00:28:12.821 "utilization": 0.001953125 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "id": 4, 00:28:12.821 "state": "OPEN", 00:28:12.821 "utilization": 0.0 00:28:12.821 } 00:28:12.821 ], 00:28:12.821 "read-only": true 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "name": "verbose_mode", 00:28:12.821 "value": true, 00:28:12.821 "unit": "", 00:28:12.821 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:12.821 }, 00:28:12.821 { 00:28:12.821 "name": "prep_upgrade_on_shutdown", 00:28:12.821 "value": false, 00:28:12.821 "unit": "", 00:28:12.821 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:12.821 } 00:28:12.821 ] 00:28:12.821 } 00:28:12.821 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:12.821 [2024-11-19 20:17:46.587896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:12.821 [2024-11-19 20:17:46.587929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:28:12.821 [2024-11-19 20:17:46.587939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:28:12.821 [2024-11-19 20:17:46.587945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:12.821 [2024-11-19 20:17:46.587962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:12.821 [2024-11-19 20:17:46.587968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:28:12.821 [2024-11-19 20:17:46.587974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:28:12.821 [2024-11-19 20:17:46.587980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:12.821 [2024-11-19 20:17:46.587995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:12.821 [2024-11-19 20:17:46.588000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:28:12.821 [2024-11-19 20:17:46.588006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:28:12.821 [2024-11-19 20:17:46.588011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:12.821 [2024-11-19 20:17:46.588052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.150 ms, result 0
00:28:12.821 true
00:28:12.821 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:28:12.821 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:12.821 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:28:13.079 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:28:13.079 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
00:28:13.079 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:28:13.338 [2024-11-19 20:17:46.976185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:13.338 [2024-11-19 20:17:46.976217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:28:13.338 [2024-11-19 20:17:46.976233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:13.338 [2024-11-19 20:17:46.976240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:13.338 [2024-11-19 20:17:46.976256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:13.338 [2024-11-19 20:17:46.976262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:28:13.338 [2024-11-19 20:17:46.976268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:28:13.338 [2024-11-19 20:17:46.976274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:13.338 [2024-11-19 20:17:46.976289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:13.338 [2024-11-19 20:17:46.976294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:28:13.338 [2024-11-19 20:17:46.976300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:28:13.338 [2024-11-19 20:17:46.976305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:13.338 [2024-11-19 20:17:46.976344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.149 ms, result 0
00:28:13.338 true
00:28:13.338 20:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:13.599 {
00:28:13.599   "name": "ftl",
00:28:13.599   "properties": [
00:28:13.599     {
00:28:13.599       "name": "superblock_version",
00:28:13.599       "value": 5,
00:28:13.599       "read-only": true
00:28:13.599     },
00:28:13.599     {
00:28:13.599       "name": "base_device",
00:28:13.599       "bands": [
00:28:13.599         {
00:28:13.599           "id": 0,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 1,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 2,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 3,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 4,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 5,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 6,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 7,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 8,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 9,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 10,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 11,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 12,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 13,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 14,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 15,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 16,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 17,
00:28:13.599           "state": "FREE",
00:28:13.599           "validity": 0.0
00:28:13.599         }
00:28:13.599       ],
00:28:13.599       "read-only": true
00:28:13.599     },
00:28:13.599     {
00:28:13.599       "name": "cache_device",
00:28:13.599       "type": "bdev",
00:28:13.599       "chunks": [
00:28:13.599         {
00:28:13.599           "id": 0,
00:28:13.599           "state": "INACTIVE",
00:28:13.599           "utilization": 0.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 1,
00:28:13.599           "state": "CLOSED",
00:28:13.599           "utilization": 1.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 2,
00:28:13.599           "state": "CLOSED",
00:28:13.599           "utilization": 1.0
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 3,
00:28:13.599           "state": "OPEN",
00:28:13.599           "utilization": 0.001953125
00:28:13.599         },
00:28:13.599         {
00:28:13.599           "id": 4,
00:28:13.599           "state": "OPEN",
00:28:13.599           "utilization": 0.0
00:28:13.599         }
00:28:13.599       ],
00:28:13.599       "read-only": true
00:28:13.599     },
00:28:13.599     {
00:28:13.599       "name": "verbose_mode",
00:28:13.599       "value": true,
00:28:13.599       "unit": "",
00:28:13.599       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:28:13.599     },
00:28:13.599     {
00:28:13.599       "name": "prep_upgrade_on_shutdown",
00:28:13.599       "value": true,
00:28:13.599       "unit": "",
00:28:13.599       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:28:13.599     }
00:28:13.599   ]
00:28:13.599 }
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80559 ]]
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80559
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80559 ']'
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80559
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80559
00:28:13.599 killing process with pid 80559
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80559'
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80559
00:28:13.599 20:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80559
00:28:14.170 [2024-11-19 20:17:47.953311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:28:14.431 [2024-11-19 20:17:47.969561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:14.431 [2024-11-19 20:17:47.969603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:28:14.431 [2024-11-19 20:17:47.969616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:14.431 [2024-11-19 20:17:47.969624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:14.431 [2024-11-19 20:17:47.969645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:28:14.431 [2024-11-19 20:17:47.972337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:14.431 [2024-11-19 20:17:47.972365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:28:14.431 [2024-11-19 20:17:47.972374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.678 ms
00:28:14.431 [2024-11-19 20:17:47.972382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:24.435 [2024-11-19 20:17:56.809885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:24.435 [2024-11-19 20:17:56.809940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:28:24.435 [2024-11-19 20:17:56.809952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8837.451 ms
00:28:24.435 [2024-11-19 20:17:56.809962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:24.435 [2024-11-19 20:17:56.811033] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.811054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:24.435 [2024-11-19 20:17:56.811062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:28:24.435 [2024-11-19 20:17:56.811068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.811948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.811967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:24.435 [2024-11-19 20:17:56.811976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:28:24.435 [2024-11-19 20:17:56.811983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.819413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.819442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:24.435 [2024-11-19 20:17:56.819449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.397 ms 00:28:24.435 [2024-11-19 20:17:56.819455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.824280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.824316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:24.435 [2024-11-19 20:17:56.824325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.802 ms 00:28:24.435 [2024-11-19 20:17:56.824331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.824393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.824401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:24.435 [2024-11-19 20:17:56.824411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:24.435 [2024-11-19 20:17:56.824417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.831743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.831769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:24.435 [2024-11-19 20:17:56.831777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.316 ms 00:28:24.435 [2024-11-19 20:17:56.831783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.838739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.838763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:24.435 [2024-11-19 20:17:56.838770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.931 ms 00:28:24.435 [2024-11-19 20:17:56.838776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.845829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.845855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:24.435 [2024-11-19 20:17:56.845861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.030 ms 00:28:24.435 [2024-11-19 20:17:56.845867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.852845] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.435 [2024-11-19 20:17:56.852870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:24.435 [2024-11-19 20:17:56.852876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.933 ms 00:28:24.435 [2024-11-19 20:17:56.852882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.435 [2024-11-19 20:17:56.852904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:24.435 [2024-11-19 20:17:56.852914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.435 [2024-11-19 20:17:56.852922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.435 [2024-11-19 20:17:56.852934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:24.435 [2024-11-19 20:17:56.852941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.852999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.853005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.853011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.853016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.853022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.436 [2024-11-19 20:17:56.853029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:24.436 [2024-11-19 20:17:56.853034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d8cf7576-d0e5-4127-bb78-58c1ce2e397d 00:28:24.436 [2024-11-19 20:17:56.853040] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:24.436 [2024-11-19 20:17:56.853046] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:24.436 [2024-11-19 20:17:56.853051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:24.436 [2024-11-19 20:17:56.853057] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:24.436 [2024-11-19 20:17:56.853063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:24.436 [2024-11-19 20:17:56.853071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:24.436 [2024-11-19 20:17:56.853076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:24.436 [2024-11-19 20:17:56.853082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:24.436 [2024-11-19 20:17:56.853088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:24.436 [2024-11-19 20:17:56.853094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.436 [2024-11-19 20:17:56.853102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:24.436 [2024-11-19 20:17:56.853107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:28:24.436 [2024-11-19 20:17:56.853113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.862531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.436 [2024-11-19 20:17:56.862558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:24.436 [2024-11-19 20:17:56.862565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.407 ms 00:28:24.436 [2024-11-19 20:17:56.862575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.862850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.436 [2024-11-19 20:17:56.862870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:24.436 [2024-11-19 20:17:56.862877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:28:24.436 [2024-11-19 20:17:56.862883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.895388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:56.895415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:24.436 [2024-11-19 20:17:56.895425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:56.895432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.895452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:56.895459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:24.436 [2024-11-19 20:17:56.895465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:56.895470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.895516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:56.895523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:24.436 [2024-11-19 20:17:56.895529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:56.895535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.895549] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:56.895555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:24.436 [2024-11-19 20:17:56.895561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:56.895567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:56.953726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:56.953759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:24.436 [2024-11-19 20:17:56.953767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:56.953776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:24.436 [2024-11-19 20:17:57.001398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:24.436 [2024-11-19 20:17:57.001478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:24.436 [2024-11-19 20:17:57.001533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:24.436 [2024-11-19 20:17:57.001620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:24.436 [2024-11-19 20:17:57.001666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:24.436 [2024-11-19 20:17:57.001714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 
[2024-11-19 20:17:57.001755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.436 [2024-11-19 20:17:57.001763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:24.436 [2024-11-19 20:17:57.001769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.436 [2024-11-19 20:17:57.001775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.436 [2024-11-19 20:17:57.001862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9032.256 ms, result 0 00:28:28.641 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:28.641 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:28.641 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:28.641 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:28.641 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81127 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81127 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81127 ']' 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.642 20:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.642 [2024-11-19 20:18:01.850803] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
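(Editor's sketch of the cycle traced above: tcp_target_shutdown kills the old spdk_tgt, whose wait blocks until the 'FTL shutdown' management process finishes persisting metadata, and tcp_target_setup relaunches the target from the saved tgt.json and blocks until its RPC socket answers. The function bodies below are an illustrative reconstruction from the xtrace, not the actual common.sh/autotest_common.sh helpers; the readiness poll in particular is an assumption.)

    # Illustrative reconstruction, assuming behavior matching the xtrace above.
    tcp_target_shutdown() {
        [[ -n $spdk_tgt_pid ]] || return 0                 # guard seen at ftl/common.sh@130
        kill -0 "$spdk_tgt_pid" && kill "$spdk_tgt_pid"    # request graceful 'FTL shutdown'
        wait "$spdk_tgt_pid"                               # blocks while FTL persists L2P/NV cache/band metadata
        unset spdk_tgt_pid
    }
    tcp_target_setup() {
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        # wait until the new target answers on the default RPC socket (assumed check)
        until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
            sleep 0.5
        done
    }

The long 'Stop core poller' step (8837.451 ms) and the 'FTL shutdown' total of 9032.256 ms above are what that wait sits behind: with prep_upgrade_on_shutdown enabled, the target only exits after the persist steps complete.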
00:28:28.642 [2024-11-19 20:18:01.850940] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81127 ] 00:28:28.642 [2024-11-19 20:18:02.012750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.642 [2024-11-19 20:18:02.099295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.900 [2024-11-19 20:18:02.662563] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:28.900 [2024-11-19 20:18:02.662615] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:29.161 [2024-11-19 20:18:02.805422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.161 [2024-11-19 20:18:02.805458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:29.161 [2024-11-19 20:18:02.805468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:29.161 [2024-11-19 20:18:02.805475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.805510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.805518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:29.162 [2024-11-19 20:18:02.805524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:29.162 [2024-11-19 20:18:02.805530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.805546] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:29.162 [2024-11-19 20:18:02.806043] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:29.162 [2024-11-19 20:18:02.806061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.806067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:29.162 [2024-11-19 20:18:02.806074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:28:29.162 [2024-11-19 20:18:02.806079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.807009] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:29.162 [2024-11-19 20:18:02.816421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.816449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:29.162 [2024-11-19 20:18:02.816461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.414 ms 00:28:29.162 [2024-11-19 20:18:02.816467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.816508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.816516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:29.162 [2024-11-19 20:18:02.816522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:29.162 [2024-11-19 20:18:02.816527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.820719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 
20:18:02.820748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:29.162 [2024-11-19 20:18:02.820755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.145 ms 00:28:29.162 [2024-11-19 20:18:02.820761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.820803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.820811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:29.162 [2024-11-19 20:18:02.820817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:29.162 [2024-11-19 20:18:02.820822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.820856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.820864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:29.162 [2024-11-19 20:18:02.820872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:29.162 [2024-11-19 20:18:02.820878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.820892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:29.162 [2024-11-19 20:18:02.823616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.823640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:29.162 [2024-11-19 20:18:02.823647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.726 ms 00:28:29.162 [2024-11-19 20:18:02.823654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.823676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.823683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:29.162 [2024-11-19 20:18:02.823689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:29.162 [2024-11-19 20:18:02.823695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.823710] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:29.162 [2024-11-19 20:18:02.823723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:29.162 [2024-11-19 20:18:02.823751] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:29.162 [2024-11-19 20:18:02.823762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:29.162 [2024-11-19 20:18:02.823840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:29.162 [2024-11-19 20:18:02.823850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:29.162 [2024-11-19 20:18:02.823857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:29.162 [2024-11-19 20:18:02.823865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:29.162 [2024-11-19 20:18:02.823872] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:29.162 [2024-11-19 20:18:02.823880] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:29.162 [2024-11-19 20:18:02.823885] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:29.162 [2024-11-19 20:18:02.823891] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:29.162 [2024-11-19 20:18:02.823897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:29.162 [2024-11-19 20:18:02.823903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.823908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:29.162 [2024-11-19 20:18:02.823914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:28:29.162 [2024-11-19 20:18:02.823919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.823984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.162 [2024-11-19 20:18:02.823990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:29.162 [2024-11-19 20:18:02.823996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:29.162 [2024-11-19 20:18:02.824003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.162 [2024-11-19 20:18:02.824078] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:29.162 [2024-11-19 20:18:02.824096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:29.162 [2024-11-19 20:18:02.824103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:29.162 [2024-11-19 20:18:02.824121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:29.162 [2024-11-19 20:18:02.824132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:29.162 [2024-11-19 20:18:02.824138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:29.162 [2024-11-19 20:18:02.824144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:29.162 [2024-11-19 20:18:02.824155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:29.162 [2024-11-19 20:18:02.824160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:29.162 [2024-11-19 20:18:02.824170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:29.162 [2024-11-19 20:18:02.824175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:29.162 [2024-11-19 20:18:02.824185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:29.162 [2024-11-19 20:18:02.824190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824195] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:29.162 [2024-11-19 20:18:02.824200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:29.162 [2024-11-19 20:18:02.824205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:29.162 [2024-11-19 20:18:02.824215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:29.162 [2024-11-19 20:18:02.824229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:29.162 [2024-11-19 20:18:02.824245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:29.162 [2024-11-19 20:18:02.824251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:29.162 [2024-11-19 20:18:02.824262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:29.162 [2024-11-19 20:18:02.824267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:29.162 [2024-11-19 20:18:02.824277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:29.162 [2024-11-19 20:18:02.824282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:29.162 [2024-11-19 20:18:02.824292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:29.162 [2024-11-19 20:18:02.824297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:29.162 [2024-11-19 20:18:02.824307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:29.162 [2024-11-19 20:18:02.824322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:29.162 [2024-11-19 20:18:02.824328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.162 [2024-11-19 20:18:02.824333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:29.163 [2024-11-19 20:18:02.824339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:29.163 [2024-11-19 20:18:02.824345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:29.163 [2024-11-19 20:18:02.824350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.163 [2024-11-19 20:18:02.824358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:29.163 [2024-11-19 20:18:02.824373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:29.163 [2024-11-19 20:18:02.824379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:29.163 [2024-11-19 20:18:02.824384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:29.163 [2024-11-19 20:18:02.824389] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:29.163 [2024-11-19 20:18:02.824394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:29.163 [2024-11-19 20:18:02.824400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:29.163 [2024-11-19 20:18:02.824406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:29.163 [2024-11-19 20:18:02.824419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:29.163 [2024-11-19 20:18:02.824436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:29.163 [2024-11-19 20:18:02.824441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:29.163 [2024-11-19 20:18:02.824447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:29.163 [2024-11-19 20:18:02.824453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:29.163 [2024-11-19 20:18:02.824490] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:29.163 [2024-11-19 20:18:02.824496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:29.163 [2024-11-19 20:18:02.824508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:29.163 [2024-11-19 20:18:02.824513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:29.163 [2024-11-19 20:18:02.824519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:29.163 [2024-11-19 20:18:02.824525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.163 [2024-11-19 20:18:02.824531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:29.163 [2024-11-19 20:18:02.824536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:28:29.163 [2024-11-19 20:18:02.824541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.163 [2024-11-19 20:18:02.824572] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:29.163 [2024-11-19 20:18:02.824579] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:33.373 [2024-11-19 20:18:06.379842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.373 [2024-11-19 20:18:06.379924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:33.373 [2024-11-19 20:18:06.379942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3555.254 ms 00:28:33.373 [2024-11-19 20:18:06.379952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.373 [2024-11-19 20:18:06.411442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.373 [2024-11-19 20:18:06.411508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:33.373 [2024-11-19 20:18:06.411524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.243 ms 00:28:33.373 [2024-11-19 20:18:06.411533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.373 [2024-11-19 20:18:06.411625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.373 [2024-11-19 20:18:06.411644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:33.373 [2024-11-19 20:18:06.411654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:33.373 [2024-11-19 20:18:06.411662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.373 [2024-11-19 20:18:06.446690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.373 [2024-11-19 20:18:06.446743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:33.373 [2024-11-19 20:18:06.446756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.963 ms 00:28:33.373 [2024-11-19 20:18:06.446768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.373 [2024-11-19 20:18:06.446811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.373 [2024-11-19 20:18:06.446820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:33.373 [2024-11-19 20:18:06.446829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:33.374 [2024-11-19 20:18:06.446838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.447453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.447490] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:33.374 [2024-11-19 20:18:06.447501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:28:33.374 [2024-11-19 20:18:06.447512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.447568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.447579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:33.374 [2024-11-19 20:18:06.447590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:33.374 [2024-11-19 20:18:06.447599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.465102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.465152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:33.374 [2024-11-19 20:18:06.465163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.478 ms 00:28:33.374 [2024-11-19 20:18:06.465171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.479614] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:33.374 [2024-11-19 20:18:06.479669] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:33.374 [2024-11-19 20:18:06.479683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.479691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:33.374 [2024-11-19 20:18:06.479701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.378 ms 00:28:33.374 [2024-11-19 20:18:06.479709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.494722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.494787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:33.374 [2024-11-19 20:18:06.494800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.958 ms 00:28:33.374 [2024-11-19 20:18:06.494808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.507442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.507490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:33.374 [2024-11-19 20:18:06.507501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.578 ms 00:28:33.374 [2024-11-19 20:18:06.507509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.520134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.520182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:33.374 [2024-11-19 20:18:06.520195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.574 ms 00:28:33.374 [2024-11-19 20:18:06.520202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.520884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.520922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:33.374 [2024-11-19 
20:18:06.520933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:28:33.374 [2024-11-19 20:18:06.520941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.593613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.593711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:33.374 [2024-11-19 20:18:06.593729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.645 ms 00:28:33.374 [2024-11-19 20:18:06.593738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.605822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:33.374 [2024-11-19 20:18:06.606885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.606929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:33.374 [2024-11-19 20:18:06.606941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.083 ms 00:28:33.374 [2024-11-19 20:18:06.606950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.607051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.607067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:33.374 [2024-11-19 20:18:06.607078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:33.374 [2024-11-19 20:18:06.607086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.607147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.607161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:33.374 [2024-11-19 20:18:06.607170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:33.374 [2024-11-19 20:18:06.607178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.607203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.607212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:33.374 [2024-11-19 20:18:06.607248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:33.374 [2024-11-19 20:18:06.607261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.607299] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:33.374 [2024-11-19 20:18:06.607313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.607323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:33.374 [2024-11-19 20:18:06.607332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:33.374 [2024-11-19 20:18:06.607342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.632732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.632792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:33.374 [2024-11-19 20:18:06.632805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.365 ms 00:28:33.374 [2024-11-19 20:18:06.632814] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.632905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.632917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:33.374 [2024-11-19 20:18:06.632928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:33.374 [2024-11-19 20:18:06.632936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.634177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3828.233 ms, result 0 00:28:33.374 [2024-11-19 20:18:06.649158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.374 [2024-11-19 20:18:06.665172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:33.374 [2024-11-19 20:18:06.673355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:33.374 [2024-11-19 20:18:06.913364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.913424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:33.374 [2024-11-19 20:18:06.913440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:33.374 [2024-11-19 20:18:06.913454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.913479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.913490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:33.374 [2024-11-19 20:18:06.913499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.374 [2024-11-19 20:18:06.913508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.913529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.374 [2024-11-19 20:18:06.913538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:33.374 [2024-11-19 20:18:06.913546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.374 [2024-11-19 20:18:06.913554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.374 [2024-11-19 20:18:06.913617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.242 ms, result 0 00:28:33.374 true 00:28:33.374 20:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.374 { 00:28:33.374 "name": "ftl", 00:28:33.374 "properties": [ 00:28:33.374 { 00:28:33.374 "name": "superblock_version", 00:28:33.374 "value": 5, 00:28:33.374 "read-only": true 00:28:33.374 }, 
00:28:33.374 {
00:28:33.374 "name": "base_device",
00:28:33.374 "bands": [
00:28:33.374 {
00:28:33.374 "id": 0,
00:28:33.374 "state": "CLOSED",
00:28:33.374 "validity": 1.0
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 1,
00:28:33.374 "state": "CLOSED",
00:28:33.374 "validity": 1.0
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 2,
00:28:33.374 "state": "CLOSED",
00:28:33.374 "validity": 0.007843137254901933
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 3,
00:28:33.374 "state": "FREE",
00:28:33.374 "validity": 0.0
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 4,
00:28:33.374 "state": "FREE",
00:28:33.374 "validity": 0.0
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 5,
00:28:33.374 "state": "FREE",
00:28:33.374 "validity": 0.0
00:28:33.374 },
00:28:33.374 {
00:28:33.374 "id": 6,
00:28:33.374 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 7,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 8,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 9,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 10,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 11,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 12,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 13,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 14,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 15,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 16,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 17,
00:28:33.375 "state": "FREE",
00:28:33.375 "validity": 0.0
00:28:33.375 }
00:28:33.375 ],
00:28:33.375 "read-only": true
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "name": "cache_device",
00:28:33.375 "type": "bdev",
00:28:33.375 "chunks": [
00:28:33.375 {
00:28:33.375 "id": 0,
00:28:33.375 "state": "INACTIVE",
00:28:33.375 "utilization": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 1,
00:28:33.375 "state": "OPEN",
00:28:33.375 "utilization": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 2,
00:28:33.375 "state": "OPEN",
00:28:33.375 "utilization": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 3,
00:28:33.375 "state": "FREE",
00:28:33.375 "utilization": 0.0
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "id": 4,
00:28:33.375 "state": "FREE",
00:28:33.375 "utilization": 0.0
00:28:33.375 }
00:28:33.375 ],
00:28:33.375 "read-only": true
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "name": "verbose_mode",
00:28:33.375 "value": true,
00:28:33.375 "unit": "",
00:28:33.375 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:28:33.375 },
00:28:33.375 {
00:28:33.375 "name": "prep_upgrade_on_shutdown",
00:28:33.375 "value": false,
00:28:33.375 "unit": "",
00:28:33.375 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:28:33.375 }
00:28:33.375 ]
00:28:33.375 }
00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:28:33.636 20:18:07
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.636 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:33.897 Validate MD5 checksum, iteration 1 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:33.897 20:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:33.897 [2024-11-19 20:18:07.625042] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
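[annotation] A note on the two jq probes traced just above: before anything is shut down, the test pulls bdev_ftl_get_properties and requires a quiescent device, i.e. zero write-buffer chunks with non-zero utilization and zero bands in OPENED state, which is why both [[ 0 -ne 0 ]] guards fall through. A minimal standalone sketch of the same checks (rpc.py path, bdev name, and jq filters as traced in this log; variable names are illustrative). Worth noting: the second filter keys on a property literally named "bands", while the dump above nests the band list under "base_device", so as dumped here it counts zero either way.

  # Sketch: the quiescence probes from upgrade_shutdown.sh, run standalone.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  props=$("$rpc" bdev_ftl_get_properties -b ftl)
  # Write-buffer (cache_device) chunks that still hold data:
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # Bands reported as OPENED:
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  [[ $used -eq 0 && $opened -eq 0 ]] || echo "not quiescent: used=$used opened=$opened" >&2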
00:28:33.897 [2024-11-19 20:18:07.625191] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81212 ] 00:28:34.159 [2024-11-19 20:18:07.793936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.159 [2024-11-19 20:18:07.939055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.074  [2024-11-19T20:18:10.440Z] Copying: 548/1024 [MB] (548 MBps) [2024-11-19T20:18:11.823Z] Copying: 1024/1024 [MB] (average 555 MBps) 00:28:38.029 00:28:38.029 20:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:38.029 20:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:39.940 Validate MD5 checksum, iteration 2 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7127623dfb0d75950bbccc7108b3e4d 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7127623dfb0d75950bbccc7108b3e4d != \b\7\1\2\7\6\2\3\d\f\b\0\d\7\5\9\5\0\b\b\c\c\c\7\1\0\8\b\3\e\4\d ]] 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:39.940 20:18:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:39.940 [2024-11-19 20:18:13.612607] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
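[annotation] The stretch above is one pass of the checksum loop: spdk_dd, acting as the NVMe/TCP initiator described by ini.json, copies a 1 GiB window out of ftln1 into a plain file, and that window's MD5 is recorded so the identical read can be replayed and compared after the dirty restart. Condensed into a self-contained sketch (binary, config, and flags exactly as traced in this log; collecting the sums into an array is a simplification of the script's skip/sum bookkeeping):

  # Sketch: two 1 GiB reads from the FTL bdev over NVMe/TCP; --skip advances
  # the window by 1024 MiB per iteration and each window's MD5 is recorded.
  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  out=/home/vagrant/spdk_repo/spdk/test/ftl/file
  declare -a sums
  for (( i = 0, skip = 0; i < 2; i++, skip += 1024 )); do
    "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of="$out" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sums[i]=$(md5sum "$out" | cut -f1 -d' ')
  done
  # The same loop runs again after the kill/restart below; the sums must match.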
00:28:39.940 [2024-11-19 20:18:13.612732] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81279 ] 00:28:40.202 [2024-11-19 20:18:13.774426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.202 [2024-11-19 20:18:13.919422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.114  [2024-11-19T20:18:16.167Z] Copying: 649/1024 [MB] (649 MBps) [2024-11-19T20:18:16.733Z] Copying: 1024/1024 [MB] (average 681 MBps) 00:28:42.939 00:28:42.939 20:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:42.939 20:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2d20ec28ce5c31759115f96ec01c2c0a 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2d20ec28ce5c31759115f96ec01c2c0a != \2\d\2\0\e\c\2\8\c\e\5\c\3\1\7\5\9\1\1\5\f\9\6\e\c\0\1\c\2\c\0\a ]] 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81127 ]] 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81127 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81335 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81335 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81335 ']' 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
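[annotation] Here is the crux of the scenario: the target owning the FTL instance gets SIGKILL, so no shutdown path runs and nothing is flushed, and a fresh spdk_tgt is then launched from the tgt.json saved earlier, forcing FTL to come up dirty and recover. A stripped-down sketch of that sequence (binary and config paths as in this log; waitforlisten stands in for the repo helper of the same name):

  # Sketch: dirty shutdown, then a cold restart from the saved target config.
  kill -9 "$spdk_tgt_pid"   # SIGKILL: no graceful teardown, state is left dirty
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # block until /var/tmp/spdk.sock accepts RPCs

The "line 834: 81127 Killed" message that bash prints a little further down is simply the shell reporting that SIGKILL against the old target.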
00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.469 20:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.469 [2024-11-19 20:18:18.920991] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:28:45.469 [2024-11-19 20:18:18.921117] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81335 ] 00:28:45.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81127 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:45.469 [2024-11-19 20:18:19.077607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.469 [2024-11-19 20:18:19.153465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.035 [2024-11-19 20:18:19.715412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:46.035 [2024-11-19 20:18:19.715461] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:46.294 [2024-11-19 20:18:19.858263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.858292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:46.295 [2024-11-19 20:18:19.858302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:46.295 [2024-11-19 20:18:19.858308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.858344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.858352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:46.295 [2024-11-19 20:18:19.858358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:46.295 [2024-11-19 20:18:19.858364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.858380] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:46.295 [2024-11-19 20:18:19.858920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:46.295 [2024-11-19 20:18:19.858936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.858943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:46.295 [2024-11-19 20:18:19.858949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:28:46.295 [2024-11-19 20:18:19.858954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.859155] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:46.295 [2024-11-19 20:18:19.871520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.871547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:46.295 [2024-11-19 20:18:19.871557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.365 ms 00:28:46.295 [2024-11-19 20:18:19.871564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.878253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:46.295 [2024-11-19 20:18:19.878274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:46.295 [2024-11-19 20:18:19.878284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:46.295 [2024-11-19 20:18:19.878290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.878525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.878538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:46.295 [2024-11-19 20:18:19.878545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:28:46.295 [2024-11-19 20:18:19.878551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.878585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.878594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:46.295 [2024-11-19 20:18:19.878599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:46.295 [2024-11-19 20:18:19.878605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.878624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.878630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:46.295 [2024-11-19 20:18:19.878636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:46.295 [2024-11-19 20:18:19.878641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.878663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:46.295 [2024-11-19 20:18:19.880897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.880916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:46.295 [2024-11-19 20:18:19.880923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.245 ms 00:28:46.295 [2024-11-19 20:18:19.880928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.880949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.880955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:46.295 [2024-11-19 20:18:19.880961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:46.295 [2024-11-19 20:18:19.880966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.880982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:46.295 [2024-11-19 20:18:19.880996] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:46.295 [2024-11-19 20:18:19.881022] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:46.295 [2024-11-19 20:18:19.881035] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:46.295 [2024-11-19 20:18:19.881112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:46.295 [2024-11-19 20:18:19.881120] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:46.295 [2024-11-19 20:18:19.881128] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:46.295 [2024-11-19 20:18:19.881135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881143] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:46.295 [2024-11-19 20:18:19.881154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:46.295 [2024-11-19 20:18:19.881160] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:46.295 [2024-11-19 20:18:19.881165] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:46.295 [2024-11-19 20:18:19.881171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.881178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:46.295 [2024-11-19 20:18:19.881184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:28:46.295 [2024-11-19 20:18:19.881190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.881262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.295 [2024-11-19 20:18:19.881269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:46.295 [2024-11-19 20:18:19.881275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:28:46.295 [2024-11-19 20:18:19.881280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.295 [2024-11-19 20:18:19.881355] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:46.295 [2024-11-19 20:18:19.881362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:46.295 [2024-11-19 20:18:19.881370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:46.295 [2024-11-19 20:18:19.881386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:46.295 [2024-11-19 20:18:19.881396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:46.295 [2024-11-19 20:18:19.881402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:46.295 [2024-11-19 20:18:19.881407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:46.295 [2024-11-19 20:18:19.881417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:46.295 [2024-11-19 20:18:19.881422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:46.295 [2024-11-19 20:18:19.881434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
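[annotation] A reading aid for the layout dump that starts above and continues below: each region is reported as an offset/size pair in MiB, and the superblock metadata dump a little further down repeats the same regions as hex block offsets and sizes (blk_offs/blk_sz). Assuming the 4 KiB FTL block size these figures imply, the two views line up; for example, blk_sz:0xe80 works out to the 14.50 MiB reported for the l2p region, and blk_sz:0x480000 to the 18432.00 MiB data_btm region. A quick sketch of the conversion (values taken from this dump):

  # Sketch: convert a region's blk_sz (counted in 4 KiB FTL blocks, an assumption
  # consistent with this dump) into the MiB figures shown in the region listing.
  to_mib() { awk -v blks="$1" 'BEGIN { printf "%.2f MiB\n", blks * 4096 / (1024 * 1024) }'; }
  to_mib $(( 0xe80 ))      # -> 14.50 MiB, matching the l2p region size
  to_mib $(( 0x480000 ))   # -> 18432.00 MiB, matching the data_btm region size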
00:28:46.295 [2024-11-19 20:18:19.881439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:46.295 [2024-11-19 20:18:19.881449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:46.295 [2024-11-19 20:18:19.881454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:46.295 [2024-11-19 20:18:19.881463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:46.295 [2024-11-19 20:18:19.881468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:46.295 [2024-11-19 20:18:19.881482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:46.295 [2024-11-19 20:18:19.881486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:46.295 [2024-11-19 20:18:19.881496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:46.295 [2024-11-19 20:18:19.881501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:46.295 [2024-11-19 20:18:19.881510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:46.295 [2024-11-19 20:18:19.881515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:46.295 [2024-11-19 20:18:19.881525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:46.295 [2024-11-19 20:18:19.881529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:46.295 [2024-11-19 20:18:19.881539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:46.295 [2024-11-19 20:18:19.881543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:46.295 [2024-11-19 20:18:19.881553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.295 [2024-11-19 20:18:19.881563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:46.296 [2024-11-19 20:18:19.881568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:46.296 [2024-11-19 20:18:19.881572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:46.296 [2024-11-19 20:18:19.881577] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:46.296 [2024-11-19 20:18:19.881583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:46.296 [2024-11-19 20:18:19.881588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:46.296 [2024-11-19 20:18:19.881594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:46.296 [2024-11-19 20:18:19.881600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:46.296 [2024-11-19 20:18:19.881605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:46.296 [2024-11-19 20:18:19.881610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:46.296 [2024-11-19 20:18:19.881615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:46.296 [2024-11-19 20:18:19.881620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:46.296 [2024-11-19 20:18:19.881625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:46.296 [2024-11-19 20:18:19.881631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:46.296 [2024-11-19 20:18:19.881637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:46.296 [2024-11-19 20:18:19.881649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:46.296 [2024-11-19 20:18:19.881664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:46.296 [2024-11-19 20:18:19.881670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:46.296 [2024-11-19 20:18:19.881675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:46.296 [2024-11-19 20:18:19.881680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:46.296 [2024-11-19 20:18:19.881716] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:46.296 [2024-11-19 20:18:19.881722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:46.296 [2024-11-19 20:18:19.881733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:46.296 [2024-11-19 20:18:19.881739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:46.296 [2024-11-19 20:18:19.881744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:46.296 [2024-11-19 20:18:19.881749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.881756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:46.296 [2024-11-19 20:18:19.881761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:28:46.296 [2024-11-19 20:18:19.881769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.900515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.900536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:46.296 [2024-11-19 20:18:19.900543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.708 ms 00:28:46.296 [2024-11-19 20:18:19.900549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.900575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.900581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:46.296 [2024-11-19 20:18:19.900588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:46.296 [2024-11-19 20:18:19.900593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.924905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.924931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:46.296 [2024-11-19 20:18:19.924939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.275 ms 00:28:46.296 [2024-11-19 20:18:19.924945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.924965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.924971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:46.296 [2024-11-19 20:18:19.924978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:46.296 [2024-11-19 20:18:19.924983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.925053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.925065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:46.296 [2024-11-19 20:18:19.925072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:46.296 [2024-11-19 20:18:19.925078] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.925108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.925115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:46.296 [2024-11-19 20:18:19.925122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:46.296 [2024-11-19 20:18:19.925127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.936696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.936721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:46.296 [2024-11-19 20:18:19.936728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.553 ms 00:28:46.296 [2024-11-19 20:18:19.936734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.936804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.936815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:46.296 [2024-11-19 20:18:19.936822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:46.296 [2024-11-19 20:18:19.936827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.964409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.964444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:46.296 [2024-11-19 20:18:19.964458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.567 ms 00:28:46.296 [2024-11-19 20:18:19.964466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:19.973650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:19.973673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:46.296 [2024-11-19 20:18:19.973686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:28:46.296 [2024-11-19 20:18:19.973692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:20.017296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:20.017330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:46.296 [2024-11-19 20:18:20.017343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.562 ms 00:28:46.296 [2024-11-19 20:18:20.017349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:20.017452] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:46.296 [2024-11-19 20:18:20.017527] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:46.296 [2024-11-19 20:18:20.017599] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:46.296 [2024-11-19 20:18:20.017669] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:46.296 [2024-11-19 20:18:20.017676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:20.017682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:46.296 [2024-11-19 
20:18:20.017688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:28:46.296 [2024-11-19 20:18:20.017695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:20.017734] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:46.296 [2024-11-19 20:18:20.017742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:20.017750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:46.296 [2024-11-19 20:18:20.017757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:46.296 [2024-11-19 20:18:20.017763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:20.028943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:20.028970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:46.296 [2024-11-19 20:18:20.028978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.163 ms 00:28:46.296 [2024-11-19 20:18:20.028985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.296 [2024-11-19 20:18:20.035627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.296 [2024-11-19 20:18:20.035648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:46.297 [2024-11-19 20:18:20.035655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:46.297 [2024-11-19 20:18:20.035662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.297 [2024-11-19 20:18:20.035723] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:46.297 [2024-11-19 20:18:20.035831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.297 [2024-11-19 20:18:20.035841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:46.297 [2024-11-19 20:18:20.035847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.109 ms 00:28:46.297 [2024-11-19 20:18:20.035853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.866 [2024-11-19 20:18:20.638753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.866 [2024-11-19 20:18:20.638828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:46.866 [2024-11-19 20:18:20.638845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 602.231 ms 00:28:46.866 [2024-11-19 20:18:20.638854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.866 [2024-11-19 20:18:20.643538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.866 [2024-11-19 20:18:20.643582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:46.866 [2024-11-19 20:18:20.643594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.502 ms 00:28:46.866 [2024-11-19 20:18:20.643602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.866 [2024-11-19 20:18:20.644622] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:46.866 [2024-11-19 20:18:20.644664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.866 [2024-11-19 20:18:20.644675] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:46.866 [2024-11-19 20:18:20.644686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:28:46.866 [2024-11-19 20:18:20.644695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.866 [2024-11-19 20:18:20.644734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.866 [2024-11-19 20:18:20.644744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:46.866 [2024-11-19 20:18:20.644754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:46.866 [2024-11-19 20:18:20.644762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.866 [2024-11-19 20:18:20.644807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 609.076 ms, result 0 00:28:46.866 [2024-11-19 20:18:20.644854] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:46.866 [2024-11-19 20:18:20.644965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.866 [2024-11-19 20:18:20.644979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:46.866 [2024-11-19 20:18:20.644987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.113 ms 00:28:46.866 [2024-11-19 20:18:20.644995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.314925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.314994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:47.809 [2024-11-19 20:18:21.315009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 668.781 ms 00:28:47.809 [2024-11-19 20:18:21.315018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.319916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.319957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:47.809 [2024-11-19 20:18:21.319969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.650 ms 00:28:47.809 [2024-11-19 20:18:21.319977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.320919] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:47.809 [2024-11-19 20:18:21.320959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.320969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:47.809 [2024-11-19 20:18:21.320978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.950 ms 00:28:47.809 [2024-11-19 20:18:21.320986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.321026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.321036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:47.809 [2024-11-19 20:18:21.321044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:47.809 [2024-11-19 20:18:21.321051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 
20:18:21.321092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 676.232 ms, result 0 00:28:47.809 [2024-11-19 20:18:21.321139] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:47.809 [2024-11-19 20:18:21.321151] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:47.809 [2024-11-19 20:18:21.321161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.321170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:47.809 [2024-11-19 20:18:21.321178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1285.448 ms 00:28:47.809 [2024-11-19 20:18:21.321187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.321217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.321243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:47.809 [2024-11-19 20:18:21.321256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:47.809 [2024-11-19 20:18:21.321265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.333595] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:47.809 [2024-11-19 20:18:21.333727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.333738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:47.809 [2024-11-19 20:18:21.333748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.445 ms 00:28:47.809 [2024-11-19 20:18:21.333757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.334494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.334513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:47.809 [2024-11-19 20:18:21.334527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.658 ms 00:28:47.809 [2024-11-19 20:18:21.334534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.336799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.336821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:47.809 [2024-11-19 20:18:21.336832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.247 ms 00:28:47.809 [2024-11-19 20:18:21.336841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.336886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.336895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:47.809 [2024-11-19 20:18:21.336904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:47.809 [2024-11-19 20:18:21.336916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.337028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.337039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:47.809 
[2024-11-19 20:18:21.337048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:47.809 [2024-11-19 20:18:21.337056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.337077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.337086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:47.809 [2024-11-19 20:18:21.337094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:47.809 [2024-11-19 20:18:21.337101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.337136] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:47.809 [2024-11-19 20:18:21.337148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.337156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:47.809 [2024-11-19 20:18:21.337164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:47.809 [2024-11-19 20:18:21.337173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.337239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.809 [2024-11-19 20:18:21.337250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:47.809 [2024-11-19 20:18:21.337258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:47.809 [2024-11-19 20:18:21.337266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.809 [2024-11-19 20:18:21.338394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1479.603 ms, result 0 00:28:47.809 [2024-11-19 20:18:21.354092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.809 [2024-11-19 20:18:21.370091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:47.809 [2024-11-19 20:18:21.379175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:47.809 Validate MD5 checksum, iteration 1 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.809 20:18:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.809 20:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:47.809 [2024-11-19 20:18:21.535314] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:28:47.809 [2024-11-19 20:18:21.535422] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81364 ] 00:28:48.068 [2024-11-19 20:18:21.694637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.068 [2024-11-19 20:18:21.802217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.977  [2024-11-19T20:18:24.030Z] Copying: 607/1024 [MB] (607 MBps) [2024-11-19T20:18:25.405Z] Copying: 1024/1024 [MB] (average 601 MBps) 00:28:51.611 00:28:51.611 20:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:51.611 20:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.575 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7127623dfb0d75950bbccc7108b3e4d 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7127623dfb0d75950bbccc7108b3e4d != \b\7\1\2\7\6\2\3\d\f\b\0\d\7\5\9\5\0\b\b\c\c\c\7\1\0\8\b\3\e\4\d ]] 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:53.576 Validate MD5 checksum, iteration 2 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:53.576 20:18:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.576 [2024-11-19 20:18:27.101266] Starting SPDK v25.01-pre git sha1 
f22e807f1 / DPDK 24.03.0 initialization... 00:28:53.576 [2024-11-19 20:18:27.101912] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81431 ] 00:28:53.576 [2024-11-19 20:18:27.257143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.576 [2024-11-19 20:18:27.344382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.488  [2024-11-19T20:18:29.541Z] Copying: 658/1024 [MB] (658 MBps) [2024-11-19T20:18:33.738Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:28:59.944 00:28:59.944 20:18:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:59.944 20:18:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2d20ec28ce5c31759115f96ec01c2c0a 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2d20ec28ce5c31759115f96ec01c2c0a != \2\d\2\0\e\c\2\8\c\e\5\c\3\1\7\5\9\1\1\5\f\9\6\e\c\0\1\c\2\c\0\a ]] 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:01.847 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81335 ]] 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81335 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81335 ']' 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81335 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81335 00:29:02.106 killing process with pid 81335 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81335' 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 81335 00:29:02.106 20:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81335 00:29:02.674 [2024-11-19 20:18:36.273633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:02.674 [2024-11-19 20:18:36.284503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.284537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:02.674 [2024-11-19 20:18:36.284546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.674 [2024-11-19 20:18:36.284553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.284571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:02.674 [2024-11-19 20:18:36.286608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.286633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:02.674 [2024-11-19 20:18:36.286642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.027 ms 00:29:02.674 [2024-11-19 20:18:36.286651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.286857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.286881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:02.674 [2024-11-19 20:18:36.286888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:29:02.674 [2024-11-19 20:18:36.286894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.288003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.288026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:02.674 [2024-11-19 20:18:36.288033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:29:02.674 [2024-11-19 20:18:36.288039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.288900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.288919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:02.674 [2024-11-19 20:18:36.288926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:29:02.674 [2024-11-19 20:18:36.288933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.295988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.296014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:02.674 [2024-11-19 20:18:36.296021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.030 ms 00:29:02.674 [2024-11-19 20:18:36.296027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.300214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.300245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:02.674 [2024-11-19 20:18:36.300253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.157 ms 00:29:02.674 [2024-11-19 20:18:36.300260] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.300327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.300336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:02.674 [2024-11-19 20:18:36.300343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:02.674 [2024-11-19 20:18:36.300348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.307528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.307554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:02.674 [2024-11-19 20:18:36.307561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.164 ms 00:29:02.674 [2024-11-19 20:18:36.307566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.314807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.314832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:02.674 [2024-11-19 20:18:36.314838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.217 ms 00:29:02.674 [2024-11-19 20:18:36.314844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.321755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.321779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:02.674 [2024-11-19 20:18:36.321786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.887 ms 00:29:02.674 [2024-11-19 20:18:36.321792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.329016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.674 [2024-11-19 20:18:36.329041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:02.674 [2024-11-19 20:18:36.329047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.180 ms 00:29:02.674 [2024-11-19 20:18:36.329052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.674 [2024-11-19 20:18:36.329075] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:02.674 [2024-11-19 20:18:36.329086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:02.674 [2024-11-19 20:18:36.329094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:02.674 [2024-11-19 20:18:36.329100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:02.675 [2024-11-19 20:18:36.329106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 
[2024-11-19 20:18:36.329134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:02.675 [2024-11-19 20:18:36.329191] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:02.675 [2024-11-19 20:18:36.329196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d8cf7576-d0e5-4127-bb78-58c1ce2e397d 00:29:02.675 [2024-11-19 20:18:36.329202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:02.675 [2024-11-19 20:18:36.329208] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:02.675 [2024-11-19 20:18:36.329213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:02.675 [2024-11-19 20:18:36.329227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:02.675 [2024-11-19 20:18:36.329233] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:02.675 [2024-11-19 20:18:36.329238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:02.675 [2024-11-19 20:18:36.329244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:02.675 [2024-11-19 20:18:36.329248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:02.675 [2024-11-19 20:18:36.329254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:02.675 [2024-11-19 20:18:36.329259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.675 [2024-11-19 20:18:36.329265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:02.675 [2024-11-19 20:18:36.329275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:29:02.675 [2024-11-19 20:18:36.329281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.338757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.675 [2024-11-19 20:18:36.338780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:02.675 [2024-11-19 20:18:36.338787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.456 ms 00:29:02.675 [2024-11-19 20:18:36.338793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:02.675 [2024-11-19 20:18:36.339064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.675 [2024-11-19 20:18:36.339080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:02.675 [2024-11-19 20:18:36.339086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:29:02.675 [2024-11-19 20:18:36.339092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.372115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.675 [2024-11-19 20:18:36.372142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:02.675 [2024-11-19 20:18:36.372149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.675 [2024-11-19 20:18:36.372155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.372180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.675 [2024-11-19 20:18:36.372187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:02.675 [2024-11-19 20:18:36.372193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.675 [2024-11-19 20:18:36.372199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.372254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.675 [2024-11-19 20:18:36.372262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:02.675 [2024-11-19 20:18:36.372268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.675 [2024-11-19 20:18:36.372274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.372287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.675 [2024-11-19 20:18:36.372296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:02.675 [2024-11-19 20:18:36.372302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.675 [2024-11-19 20:18:36.372308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.675 [2024-11-19 20:18:36.431614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.675 [2024-11-19 20:18:36.431644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:02.675 [2024-11-19 20:18:36.431652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.675 [2024-11-19 20:18:36.431658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:02.934 [2024-11-19 20:18:36.480325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.934 [2024-11-19 20:18:36.480331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:02.934 [2024-11-19 20:18:36.480408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.934 [2024-11-19 20:18:36.480413] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:02.934 [2024-11-19 20:18:36.480459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.934 [2024-11-19 20:18:36.480472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:02.934 [2024-11-19 20:18:36.480553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.934 [2024-11-19 20:18:36.480558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:02.934 [2024-11-19 20:18:36.480595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.934 [2024-11-19 20:18:36.480602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.934 [2024-11-19 20:18:36.480630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.934 [2024-11-19 20:18:36.480636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:02.934 [2024-11-19 20:18:36.480642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.935 [2024-11-19 20:18:36.480648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.935 [2024-11-19 20:18:36.480680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:02.935 [2024-11-19 20:18:36.480687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:02.935 [2024-11-19 20:18:36.480693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:02.935 [2024-11-19 20:18:36.480700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.935 [2024-11-19 20:18:36.480790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.265 ms, result 0 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:03.504 Remove shared memory files 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:03.504 20:18:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81127 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:03.504 00:29:03.504 real 1m25.843s 00:29:03.504 user 1m56.728s 00:29:03.504 sys 0m19.720s 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.504 20:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:03.504 ************************************ 00:29:03.504 END TEST ftl_upgrade_shutdown 00:29:03.504 ************************************ 00:29:03.504 20:18:37 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:03.504 20:18:37 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:03.505 20:18:37 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:03.505 20:18:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.505 20:18:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:03.505 ************************************ 00:29:03.505 START TEST ftl_restore_fast 00:29:03.505 ************************************ 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:03.505 * Looking for test storage... 00:29:03.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.505 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.765 --rc genhtml_branch_coverage=1 00:29:03.765 --rc genhtml_function_coverage=1 00:29:03.765 --rc genhtml_legend=1 00:29:03.765 --rc geninfo_all_blocks=1 00:29:03.765 --rc geninfo_unexecuted_blocks=1 00:29:03.765 00:29:03.765 ' 00:29:03.765 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.765 --rc genhtml_branch_coverage=1 00:29:03.765 --rc genhtml_function_coverage=1 00:29:03.765 --rc genhtml_legend=1 00:29:03.765 --rc geninfo_all_blocks=1 00:29:03.765 --rc geninfo_unexecuted_blocks=1 00:29:03.765 00:29:03.765 ' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.766 --rc genhtml_branch_coverage=1 00:29:03.766 --rc genhtml_function_coverage=1 00:29:03.766 --rc genhtml_legend=1 00:29:03.766 --rc geninfo_all_blocks=1 00:29:03.766 --rc geninfo_unexecuted_blocks=1 00:29:03.766 00:29:03.766 ' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.766 --rc genhtml_branch_coverage=1 00:29:03.766 --rc genhtml_function_coverage=1 00:29:03.766 --rc genhtml_legend=1 00:29:03.766 --rc geninfo_all_blocks=1 00:29:03.766 --rc geninfo_unexecuted_blocks=1 00:29:03.766 00:29:03.766 ' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Lii7Ld7L5e 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:03.766 20:18:37 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81615 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81615 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81615 ']' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:03.766 20:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.766 [2024-11-19 20:18:37.411294] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:03.766 [2024-11-19 20:18:37.411411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81615 ] 00:29:04.025 [2024-11-19 20:18:37.566629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.025 [2024-11-19 20:18:37.641905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:04.592 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:04.850 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:05.108 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:05.108 { 00:29:05.108 "name": "nvme0n1", 00:29:05.108 "aliases": [ 00:29:05.108 "101d28b9-63d9-4213-8cb1-392ec9a82f95" 00:29:05.108 ], 00:29:05.108 "product_name": "NVMe disk", 00:29:05.108 "block_size": 4096, 00:29:05.108 "num_blocks": 1310720, 00:29:05.108 "uuid": "101d28b9-63d9-4213-8cb1-392ec9a82f95", 00:29:05.108 "numa_id": -1, 00:29:05.108 "assigned_rate_limits": { 00:29:05.108 "rw_ios_per_sec": 0, 00:29:05.108 "rw_mbytes_per_sec": 0, 00:29:05.108 "r_mbytes_per_sec": 0, 00:29:05.108 "w_mbytes_per_sec": 0 00:29:05.108 }, 00:29:05.108 "claimed": true, 00:29:05.108 "claim_type": "read_many_write_one", 00:29:05.108 "zoned": false, 00:29:05.108 "supported_io_types": { 00:29:05.108 "read": true, 00:29:05.108 "write": true, 00:29:05.108 "unmap": true, 00:29:05.108 "flush": true, 00:29:05.108 "reset": true, 00:29:05.108 "nvme_admin": true, 00:29:05.108 "nvme_io": true, 00:29:05.108 "nvme_io_md": false, 00:29:05.108 "write_zeroes": true, 00:29:05.108 "zcopy": false, 00:29:05.108 "get_zone_info": false, 00:29:05.108 "zone_management": false, 00:29:05.108 "zone_append": false, 00:29:05.108 "compare": true, 00:29:05.108 "compare_and_write": false, 00:29:05.108 "abort": true, 00:29:05.108 "seek_hole": false, 00:29:05.108 "seek_data": false, 00:29:05.108 "copy": true, 00:29:05.108 "nvme_iov_md": false 00:29:05.108 }, 00:29:05.108 "driver_specific": { 00:29:05.108 "nvme": [ 00:29:05.108 { 00:29:05.108 "pci_address": "0000:00:11.0", 00:29:05.108 "trid": { 00:29:05.108 "trtype": "PCIe", 00:29:05.108 "traddr": "0000:00:11.0" 00:29:05.108 }, 00:29:05.108 "ctrlr_data": { 00:29:05.108 "cntlid": 0, 00:29:05.108 "vendor_id": "0x1b36", 00:29:05.108 "model_number": "QEMU NVMe Ctrl", 00:29:05.108 "serial_number": "12341", 00:29:05.108 "firmware_revision": "8.0.0", 00:29:05.108 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:05.108 "oacs": { 00:29:05.108 "security": 0, 00:29:05.108 "format": 1, 00:29:05.108 "firmware": 0, 00:29:05.109 "ns_manage": 1 00:29:05.109 }, 00:29:05.109 "multi_ctrlr": false, 00:29:05.109 "ana_reporting": false 00:29:05.109 }, 00:29:05.109 "vs": { 00:29:05.109 "nvme_version": "1.4" 00:29:05.109 }, 00:29:05.109 "ns_data": { 00:29:05.109 "id": 1, 00:29:05.109 "can_share": false 00:29:05.109 } 00:29:05.109 } 00:29:05.109 ], 00:29:05.109 "mp_policy": "active_passive" 00:29:05.109 } 00:29:05.109 } 00:29:05.109 ]' 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:05.109 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:05.367 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=2431ca7d-9570-4b86-94ff-c25849897066 00:29:05.367 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:05.367 20:18:38 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2431ca7d-9570-4b86-94ff-c25849897066 00:29:05.627 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:05.627 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=98159960-1a8a-4f71-a55c-a356310e369d 00:29:05.627 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 98159960-1a8a-4f71-a55c-a356310e369d 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:05.886 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:06.145 { 00:29:06.145 "name": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:06.145 "aliases": [ 00:29:06.145 "lvs/nvme0n1p0" 00:29:06.145 ], 00:29:06.145 "product_name": "Logical Volume", 00:29:06.145 "block_size": 4096, 00:29:06.145 "num_blocks": 26476544, 00:29:06.145 "uuid": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:06.145 "assigned_rate_limits": { 00:29:06.145 "rw_ios_per_sec": 0, 00:29:06.145 "rw_mbytes_per_sec": 0, 00:29:06.145 "r_mbytes_per_sec": 0, 00:29:06.145 "w_mbytes_per_sec": 0 00:29:06.145 }, 00:29:06.145 "claimed": false, 00:29:06.145 "zoned": false, 00:29:06.145 "supported_io_types": { 00:29:06.145 "read": true, 00:29:06.145 "write": true, 00:29:06.145 "unmap": true, 00:29:06.145 "flush": false, 00:29:06.145 "reset": true, 00:29:06.145 "nvme_admin": false, 00:29:06.145 "nvme_io": false, 00:29:06.145 "nvme_io_md": false, 00:29:06.145 "write_zeroes": true, 00:29:06.145 "zcopy": false, 00:29:06.145 "get_zone_info": false, 00:29:06.145 "zone_management": false, 00:29:06.145 
"zone_append": false, 00:29:06.145 "compare": false, 00:29:06.145 "compare_and_write": false, 00:29:06.145 "abort": false, 00:29:06.145 "seek_hole": true, 00:29:06.145 "seek_data": true, 00:29:06.145 "copy": false, 00:29:06.145 "nvme_iov_md": false 00:29:06.145 }, 00:29:06.145 "driver_specific": { 00:29:06.145 "lvol": { 00:29:06.145 "lvol_store_uuid": "98159960-1a8a-4f71-a55c-a356310e369d", 00:29:06.145 "base_bdev": "nvme0n1", 00:29:06.145 "thin_provision": true, 00:29:06.145 "num_allocated_clusters": 0, 00:29:06.145 "snapshot": false, 00:29:06.145 "clone": false, 00:29:06.145 "esnap_clone": false 00:29:06.145 } 00:29:06.145 } 00:29:06.145 } 00:29:06.145 ]' 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:06.145 20:18:39 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:06.404 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:06.663 { 00:29:06.663 "name": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:06.663 "aliases": [ 00:29:06.663 "lvs/nvme0n1p0" 00:29:06.663 ], 00:29:06.663 "product_name": "Logical Volume", 00:29:06.663 "block_size": 4096, 00:29:06.663 "num_blocks": 26476544, 00:29:06.663 "uuid": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:06.663 "assigned_rate_limits": { 00:29:06.663 "rw_ios_per_sec": 0, 00:29:06.663 "rw_mbytes_per_sec": 0, 00:29:06.663 "r_mbytes_per_sec": 0, 00:29:06.663 "w_mbytes_per_sec": 0 00:29:06.663 }, 00:29:06.663 "claimed": false, 00:29:06.663 "zoned": false, 00:29:06.663 "supported_io_types": { 00:29:06.663 "read": true, 00:29:06.663 "write": true, 00:29:06.663 "unmap": true, 00:29:06.663 "flush": false, 00:29:06.663 "reset": true, 00:29:06.663 "nvme_admin": false, 00:29:06.663 "nvme_io": false, 00:29:06.663 "nvme_io_md": false, 00:29:06.663 "write_zeroes": true, 00:29:06.663 "zcopy": false, 00:29:06.663 "get_zone_info": false, 00:29:06.663 
"zone_management": false, 00:29:06.663 "zone_append": false, 00:29:06.663 "compare": false, 00:29:06.663 "compare_and_write": false, 00:29:06.663 "abort": false, 00:29:06.663 "seek_hole": true, 00:29:06.663 "seek_data": true, 00:29:06.663 "copy": false, 00:29:06.663 "nvme_iov_md": false 00:29:06.663 }, 00:29:06.663 "driver_specific": { 00:29:06.663 "lvol": { 00:29:06.663 "lvol_store_uuid": "98159960-1a8a-4f71-a55c-a356310e369d", 00:29:06.663 "base_bdev": "nvme0n1", 00:29:06.663 "thin_provision": true, 00:29:06.663 "num_allocated_clusters": 0, 00:29:06.663 "snapshot": false, 00:29:06.663 "clone": false, 00:29:06.663 "esnap_clone": false 00:29:06.663 } 00:29:06.663 } 00:29:06.663 } 00:29:06.663 ]' 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:06.663 20:18:40 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:06.922 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c10cdb96-1093-4d6f-b81f-2cf2b3097a5e 00:29:07.180 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:07.180 { 00:29:07.180 "name": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:07.180 "aliases": [ 00:29:07.180 "lvs/nvme0n1p0" 00:29:07.180 ], 00:29:07.180 "product_name": "Logical Volume", 00:29:07.180 "block_size": 4096, 00:29:07.180 "num_blocks": 26476544, 00:29:07.180 "uuid": "c10cdb96-1093-4d6f-b81f-2cf2b3097a5e", 00:29:07.180 "assigned_rate_limits": { 00:29:07.180 "rw_ios_per_sec": 0, 00:29:07.180 "rw_mbytes_per_sec": 0, 00:29:07.180 "r_mbytes_per_sec": 0, 00:29:07.180 "w_mbytes_per_sec": 0 00:29:07.180 }, 00:29:07.180 "claimed": false, 00:29:07.180 "zoned": false, 00:29:07.180 "supported_io_types": { 00:29:07.180 "read": true, 00:29:07.180 "write": true, 00:29:07.180 "unmap": true, 00:29:07.180 "flush": false, 00:29:07.180 "reset": true, 00:29:07.180 "nvme_admin": false, 00:29:07.180 "nvme_io": false, 00:29:07.180 "nvme_io_md": false, 00:29:07.180 "write_zeroes": true, 00:29:07.180 "zcopy": false, 00:29:07.180 "get_zone_info": false, 00:29:07.180 "zone_management": false, 00:29:07.180 "zone_append": false, 00:29:07.180 "compare": false, 00:29:07.180 "compare_and_write": false, 00:29:07.180 "abort": false, 
00:29:07.180 "seek_hole": true, 00:29:07.180 "seek_data": true, 00:29:07.180 "copy": false, 00:29:07.181 "nvme_iov_md": false 00:29:07.181 }, 00:29:07.181 "driver_specific": { 00:29:07.181 "lvol": { 00:29:07.181 "lvol_store_uuid": "98159960-1a8a-4f71-a55c-a356310e369d", 00:29:07.181 "base_bdev": "nvme0n1", 00:29:07.181 "thin_provision": true, 00:29:07.181 "num_allocated_clusters": 0, 00:29:07.181 "snapshot": false, 00:29:07.181 "clone": false, 00:29:07.181 "esnap_clone": false 00:29:07.181 } 00:29:07.181 } 00:29:07.181 } 00:29:07.181 ]' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c10cdb96-1093-4d6f-b81f-2cf2b3097a5e --l2p_dram_limit 10' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:07.181 20:18:40 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c10cdb96-1093-4d6f-b81f-2cf2b3097a5e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:07.440 [2024-11-19 20:18:40.984032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.984066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:07.440 [2024-11-19 20:18:40.984077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:07.440 [2024-11-19 20:18:40.984084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.984125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.984133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:07.440 [2024-11-19 20:18:40.984141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:07.440 [2024-11-19 20:18:40.984147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.984165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:07.440 [2024-11-19 20:18:40.984738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:07.440 [2024-11-19 20:18:40.984754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.984761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:07.440 [2024-11-19 20:18:40.984769] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:29:07.440 [2024-11-19 20:18:40.984774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.984800] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6cd190ad-34d2-462b-b0b7-b02d87ff5233 00:29:07.440 [2024-11-19 20:18:40.985721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.985743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:07.440 [2024-11-19 20:18:40.985751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:07.440 [2024-11-19 20:18:40.985759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.990405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.990430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:07.440 [2024-11-19 20:18:40.990439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:29:07.440 [2024-11-19 20:18:40.990446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.990512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.990520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:07.440 [2024-11-19 20:18:40.990527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:07.440 [2024-11-19 20:18:40.990536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.990576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.990584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:07.440 [2024-11-19 20:18:40.990590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:07.440 [2024-11-19 20:18:40.990599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.990616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:07.440 [2024-11-19 20:18:40.993457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.993480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:07.440 [2024-11-19 20:18:40.993488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:29:07.440 [2024-11-19 20:18:40.993494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.993520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.993526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:07.440 [2024-11-19 20:18:40.993533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:07.440 [2024-11-19 20:18:40.993539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.993552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:07.440 [2024-11-19 20:18:40.993656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:07.440 [2024-11-19 20:18:40.993672] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:07.440 [2024-11-19 20:18:40.993681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:07.440 [2024-11-19 20:18:40.993689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:07.440 [2024-11-19 20:18:40.993696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:07.440 [2024-11-19 20:18:40.993704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:07.440 [2024-11-19 20:18:40.993710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:07.440 [2024-11-19 20:18:40.993718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:07.440 [2024-11-19 20:18:40.993723] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:07.440 [2024-11-19 20:18:40.993731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.993736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:07.440 [2024-11-19 20:18:40.993743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:29:07.440 [2024-11-19 20:18:40.993754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.440 [2024-11-19 20:18:40.993820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.440 [2024-11-19 20:18:40.993826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:07.441 [2024-11-19 20:18:40.993834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:07.441 [2024-11-19 20:18:40.993839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.441 [2024-11-19 20:18:40.993917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:07.441 [2024-11-19 20:18:40.993924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:07.441 [2024-11-19 20:18:40.993932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:07.441 [2024-11-19 20:18:40.993938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.993945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:07.441 [2024-11-19 20:18:40.993950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.993956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:07.441 [2024-11-19 20:18:40.993961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:07.441 [2024-11-19 20:18:40.993967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:07.441 [2024-11-19 20:18:40.993972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:07.441 [2024-11-19 20:18:40.993978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:07.441 [2024-11-19 20:18:40.993983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:07.441 [2024-11-19 20:18:40.993990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:07.441 [2024-11-19 20:18:40.993995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:07.441 [2024-11-19 20:18:40.994002] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:07.441 [2024-11-19 20:18:40.994007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:07.441 [2024-11-19 20:18:40.994021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:07.441 [2024-11-19 20:18:40.994027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:07.441 [2024-11-19 20:18:40.994040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:07.441 [2024-11-19 20:18:40.994051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:07.441 [2024-11-19 20:18:40.994056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:07.441 [2024-11-19 20:18:40.994067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:07.441 [2024-11-19 20:18:40.994073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:07.441 [2024-11-19 20:18:40.994084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:07.441 [2024-11-19 20:18:40.994090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:07.441 [2024-11-19 20:18:40.994101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:07.441 [2024-11-19 20:18:40.994108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:07.441 [2024-11-19 20:18:40.994119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:07.441 [2024-11-19 20:18:40.994124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:07.441 [2024-11-19 20:18:40.994130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:07.441 [2024-11-19 20:18:40.994135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:07.441 [2024-11-19 20:18:40.994141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:07.441 [2024-11-19 20:18:40.994145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:07.441 [2024-11-19 20:18:40.994156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:07.441 [2024-11-19 20:18:40.994162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:07.441 [2024-11-19 20:18:40.994167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:07.441 [2024-11-19 20:18:40.994175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:07.441 [2024-11-19 20:18:40.994181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:07.441 [2024-11-19 20:18:40.994187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:07.441 [2024-11-19 20:18:40.994194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:07.441 [2024-11-19 20:18:40.994202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:29:07.441 [2024-11-19 20:18:40.994207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:29:07.441 [2024-11-19 20:18:40.994213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:07.441 [2024-11-19 20:18:40.994232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:29:07.441 [2024-11-19 20:18:40.994239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:29:07.441 [2024-11-19 20:18:40.994246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:07.441 [2024-11-19 20:18:40.994256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:07.441 [2024-11-19 20:18:40.994271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:07.441 [2024-11-19 20:18:40.994277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:07.441 [2024-11-19 20:18:40.994284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:07.441 [2024-11-19 20:18:40.994290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:07.441 [2024-11-19 20:18:40.994296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:07.441 [2024-11-19 20:18:40.994302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:07.441 [2024-11-19 20:18:40.994308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:07.441 [2024-11-19 20:18:40.994314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:07.441 [2024-11-19 20:18:40.994324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:07.441 [2024-11-19 20:18:40.994355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:07.441 [2024-11-19 20:18:40.994362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:07.441 [2024-11-19 20:18:40.994375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:07.441 [2024-11-19 20:18:40.994381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:07.441 [2024-11-19 20:18:40.994387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:07.441 [2024-11-19 20:18:40.994393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:07.441 [2024-11-19 20:18:40.994400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:29:07.441 [2024-11-19 20:18:40.994406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms
00:29:07.441 [2024-11-19 20:18:40.994412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.441 [2024-11-19 20:18:40.994443] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:29:07.441 [2024-11-19 20:18:40.994453] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:29:10.742 [2024-11-19 20:18:44.465652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:10.742 [2024-11-19 20:18:44.465739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:29:10.742 [2024-11-19 20:18:44.465757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3471.191 ms
00:29:10.742 [2024-11-19 20:18:44.465770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:10.742 [2024-11-19 20:18:44.496893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:10.742 [2024-11-19 20:18:44.496951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:10.742 [2024-11-19 20:18:44.496966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.899 ms
00:29:10.742 [2024-11-19 20:18:44.496977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:10.742 [2024-11-19 20:18:44.497117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:10.742 [2024-11-19 20:18:44.497131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:29:10.742 [2024-11-19 20:18:44.497141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:29:10.742 [2024-11-19 20:18:44.497155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:10.742 [2024-11-19 20:18:44.532417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:10.742 [2024-11-19 20:18:44.532459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:10.742 [2024-11-19 20:18:44.532470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.206 ms
00:29:10.742 [2024-11-19 20:18:44.532481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:10.742 [2024-11-19 20:18:44.532516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:10.742 [2024-11-19 20:18:44.532531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:10.742 [2024-11-19 20:18:44.532539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:29:10.742 [2024-11-19 20:18:44.532550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.001 [2024-11-19 20:18:44.533135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.001 [2024-11-19 20:18:44.533173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:11.001 [2024-11-19 20:18:44.533185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms
00:29:11.001 [2024-11-19 20:18:44.533195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.001 [2024-11-19 20:18:44.533330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.001 [2024-11-19 20:18:44.533342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:11.001 [2024-11-19 20:18:44.533354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms
00:29:11.001 [2024-11-19 20:18:44.533368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.001 [2024-11-19 20:18:44.550568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.550609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:11.002 [2024-11-19 20:18:44.550620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.178 ms
00:29:11.002 [2024-11-19 20:18:44.550631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.563576] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:11.002 [2024-11-19 20:18:44.567309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.567343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:11.002 [2024-11-19 20:18:44.567355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.583 ms
00:29:11.002 [2024-11-19 20:18:44.567363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.653664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.653705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:29:11.002 [2024-11-19 20:18:44.653721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.267 ms
00:29:11.002 [2024-11-19 20:18:44.653730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.653909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.653923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:11.002 [2024-11-19 20:18:44.653935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms
00:29:11.002 [2024-11-19 20:18:44.653943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.677591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.677619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:29:11.002 [2024-11-19 20:18:44.677633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.601 ms
00:29:11.002 [2024-11-19 20:18:44.677641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.700843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.700873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:29:11.002 [2024-11-19 20:18:44.700886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms
00:29:11.002 [2024-11-19 20:18:44.700893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.701476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.701492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:29:11.002 [2024-11-19 20:18:44.701502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms
00:29:11.002 [2024-11-19 20:18:44.701510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.002 [2024-11-19 20:18:44.783764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.002 [2024-11-19 20:18:44.783807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:29:11.002 [2024-11-19 20:18:44.783826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.200 ms
00:29:11.002 [2024-11-19 20:18:44.783835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.809188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.263 [2024-11-19 20:18:44.809239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:29:11.263 [2024-11-19 20:18:44.809254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.288 ms
00:29:11.263 [2024-11-19 20:18:44.809263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.834421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.263 [2024-11-19 20:18:44.834459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:29:11.263 [2024-11-19 20:18:44.834473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.126 ms
00:29:11.263 [2024-11-19 20:18:44.834480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.860486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.263 [2024-11-19 20:18:44.860527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:11.263 [2024-11-19 20:18:44.860541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.974 ms
00:29:11.263 [2024-11-19 20:18:44.860550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.860583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.263 [2024-11-19 20:18:44.860592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:11.263 [2024-11-19 20:18:44.860607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:11.263 [2024-11-19 20:18:44.860615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.860712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.263 [2024-11-19 20:18:44.860723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:11.263 [2024-11-19 20:18:44.860737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:29:11.263 [2024-11-19 20:18:44.860745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.263 [2024-11-19 20:18:44.861891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3877.358 ms, result 0
00:29:11.263 {
00:29:11.263 "name": "ftl0",
00:29:11.263 "uuid": "6cd190ad-34d2-462b-b0b7-b02d87ff5233"
00:29:11.263 }
00:29:11.263 20:18:44 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:29:11.263 20:18:44 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:29:11.524 20:18:45 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:29:11.524 20:18:45 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-11-19 20:18:45.313291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.524 [2024-11-19 20:18:45.313341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:11.524 [2024-11-19 20:18:45.313356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:29:11.524 [2024-11-19 20:18:45.313373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.524 [2024-11-19 20:18:45.313398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:11.786 [2024-11-19 20:18:45.316496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.786 [2024-11-19 20:18:45.316531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:11.786 [2024-11-19 20:18:45.316545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms
00:29:11.786 [2024-11-19 20:18:45.316554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.786 [2024-11-19 20:18:45.316832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.786 [2024-11-19 20:18:45.316843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:11.786 [2024-11-19 20:18:45.316857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms
00:29:11.786 [2024-11-19 20:18:45.316866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.786 [2024-11-19 20:18:45.320113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.786 [2024-11-19 20:18:45.320131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:11.786 [2024-11-19 20:18:45.320143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms
00:29:11.786 [2024-11-19 20:18:45.320152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.786 [2024-11-19 20:18:45.326439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.786 [2024-11-19 20:18:45.326473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:11.786 [2024-11-19 20:18:45.326490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.265 ms
00:29:11.786 [2024-11-19 20:18:45.326498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.786 [2024-11-19 20:18:45.352695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.786 [2024-11-19 20:18:45.352734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:11.786 [2024-11-19 20:18:45.352750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.115 ms
00:29:11.786 [2024-11-19 20:18:45.352757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.370022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.370066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:11.787 [2024-11-19 20:18:45.370083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.208 ms
00:29:11.787 [2024-11-19 20:18:45.370092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.370275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.370289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:11.787 [2024-11-19 20:18:45.370301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms
00:29:11.787 [2024-11-19 20:18:45.370309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.395525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.395561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:11.787 [2024-11-19 20:18:45.395575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.191 ms
00:29:11.787 [2024-11-19 20:18:45.395582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.420537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.420575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:11.787 [2024-11-19 20:18:45.420589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.901 ms
00:29:11.787 [2024-11-19 20:18:45.420596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.445018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.445055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:11.787 [2024-11-19 20:18:45.445068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.368 ms
00:29:11.787 [2024-11-19 20:18:45.445075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.469755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.787 [2024-11-19 20:18:45.469793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:11.787 [2024-11-19 20:18:45.469807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.583 ms
00:29:11.787 [2024-11-19 20:18:45.469814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.787 [2024-11-19 20:18:45.469861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:11.787 [2024-11-19 20:18:45.469877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.469999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:29:11.787 [2024-11-19 20:18:45.470516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:11.788 [2024-11-19 20:18:45.470831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:11.788 [2024-11-19 20:18:45.470844] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6cd190ad-34d2-462b-b0b7-b02d87ff5233
00:29:11.788 [2024-11-19 20:18:45.470852] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:11.788 [2024-11-19 20:18:45.470863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:11.788 [2024-11-19 20:18:45.470870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:11.788 [2024-11-19 20:18:45.470884] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:11.788 [2024-11-19 20:18:45.470890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:11.788 [2024-11-19 20:18:45.470900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:11.788 [2024-11-19 20:18:45.470907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:11.788 [2024-11-19 20:18:45.470916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:11.788 [2024-11-19 20:18:45.470922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:11.788 [2024-11-19 20:18:45.470932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.788 [2024-11-19 20:18:45.470939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:11.788 [2024-11-19 20:18:45.470950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms
00:29:11.788 [2024-11-19 20:18:45.470958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.484249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.788 [2024-11-19 20:18:45.484282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:11.788 [2024-11-19 20:18:45.484295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms
00:29:11.788 [2024-11-19 20:18:45.484303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.484707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.788 [2024-11-19 20:18:45.484725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:11.788 [2024-11-19 20:18:45.484737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms
00:29:11.788 [2024-11-19 20:18:45.484747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.531175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:11.788 [2024-11-19 20:18:45.531215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:11.788 [2024-11-19 20:18:45.531247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.788 [2024-11-19 20:18:45.531256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.531329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:11.788 [2024-11-19 20:18:45.531338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:11.788 [2024-11-19 20:18:45.531348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.788 [2024-11-19 20:18:45.531359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.531455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:11.788 [2024-11-19 20:18:45.531466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:11.788 [2024-11-19 20:18:45.531476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.788 [2024-11-19 20:18:45.531484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.788 [2024-11-19 20:18:45.531506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:11.788 [2024-11-19 20:18:45.531514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:11.788 [2024-11-19 20:18:45.531523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.788 [2024-11-19 20:18:45.531531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.614957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.615004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:12.050 [2024-11-19 20:18:45.615019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.615027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:12.050 [2024-11-19 20:18:45.683152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:12.050 [2024-11-19 20:18:45.683288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:12.050 [2024-11-19 20:18:45.683394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:12.050 [2024-11-19 20:18:45.683526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:12.050 [2024-11-19 20:18:45.683596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:12.050 [2024-11-19 20:18:45.683671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:12.050 [2024-11-19 20:18:45.683742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:12.050 [2024-11-19 20:18:45.683752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:12.050 [2024-11-19 20:18:45.683760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:12.050 [2024-11-19 20:18:45.683907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.570 ms, result 0
00:29:12.050 true
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81615
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81615 ']'
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81615
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81615
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:12.050 killing process with pid 81615
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81615'
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81615
00:29:12.050 20:18:45 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81615
00:29:18.641 20:18:51 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:29:21.930 262144+0 records in
00:29:21.930 262144+0 records out
00:29:21.930 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.69416 s, 291 MB/s
00:29:21.930 20:18:55 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:23.304 20:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-19 20:18:57.085084] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:29:23.304 [2024-11-19 20:18:57.085295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81835 ]
00:29:23.564 [2024-11-19 20:18:57.240980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:23.564 [2024-11-19 20:18:57.348811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:24.139 [2024-11-19 20:18:57.637984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-19 20:18:57.638074] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:24.139 [2024-11-19 20:18:57.800507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.800574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:24.139 [2024-11-19 20:18:57.800596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:29:24.139 [2024-11-19 20:18:57.800605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.800662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.800673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:24.139 [2024-11-19 20:18:57.800685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:29:24.139 [2024-11-19 20:18:57.800693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.800714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:24.139 [2024-11-19 20:18:57.801467] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:24.139 [2024-11-19 20:18:57.801489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.801498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:24.139 [2024-11-19 20:18:57.801507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms
00:29:24.139 [2024-11-19 20:18:57.801515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.803315] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:24.139 [2024-11-19 20:18:57.818165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.818237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:24.139 [2024-11-19 20:18:57.818252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.852 ms
00:29:24.139 [2024-11-19 20:18:57.818261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.818349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.818359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:24.139 [2024-11-19 20:18:57.818369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:29:24.139 [2024-11-19 20:18:57.818377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.826846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.826895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:24.139 [2024-11-19 20:18:57.826906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.386 ms
00:29:24.139 [2024-11-19 20:18:57.826915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.827006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.827015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:24.139 [2024-11-19 20:18:57.827023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:29:24.139 [2024-11-19 20:18:57.827032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.827078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.827088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:29:24.139 [2024-11-19 20:18:57.827096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:29:24.139 [2024-11-19 20:18:57.827103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.827127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:24.139 [2024-11-19 20:18:57.831248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.831291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:24.139 [2024-11-19 20:18:57.831302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms
00:29:24.139 [2024-11-19 20:18:57.831313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.831352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.831361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:29:24.139 [2024-11-19 20:18:57.831370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:29:24.139 [2024-11-19 20:18:57.831380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.831435] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:24.139 [2024-11-19 20:18:57.831459] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:24.139 [2024-11-19 20:18:57.831497] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:24.139 [2024-11-19 20:18:57.831517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:24.139 [2024-11-19 20:18:57.831624] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:24.139 [2024-11-19 20:18:57.831636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:24.139 [2024-11-19 20:18:57.831647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:24.139 [2024-11-19 20:18:57.831658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:29:24.139 [2024-11-19 20:18:57.831667] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:29:24.139 [2024-11-19 20:18:57.831675] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:29:24.139 [2024-11-19 20:18:57.831683] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:29:24.139 [2024-11-19 20:18:57.831691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:29:24.139 [2024-11-19 20:18:57.831698] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:29:24.139 [2024-11-19 20:18:57.831710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.831718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:29:24.139 [2024-11-19 20:18:57.831726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms
00:29:24.139 [2024-11-19 20:18:57.831733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.831816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.139 [2024-11-19 20:18:57.831824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:29:24.139 [2024-11-19 20:18:57.831832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:29:24.139 [2024-11-19 20:18:57.831840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.139 [2024-11-19 20:18:57.831946] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:24.139 [2024-11-19 20:18:57.831967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:24.139 [2024-11-19 20:18:57.831976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:24.139 [2024-11-19 20:18:57.831984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.139 [2024-11-19 20:18:57.831992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:24.139 [2024-11-19 20:18:57.831998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:29:24.139 [2024-11-19 20:18:57.832006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:29:24.139 [2024-11-19 20:18:57.832013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:24.139 [2024-11-19 20:18:57.832020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:29:24.139 [2024-11-19 20:18:57.832026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:24.139 [2024-11-19 20:18:57.832033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:24.139 [2024-11-19 20:18:57.832043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:29:24.139 [2024-11-19 20:18:57.832050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:24.139 [2024-11-19 20:18:57.832057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:24.139 [2024-11-19 20:18:57.832065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:29:24.139 [2024-11-19 20:18:57.832078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.139 [2024-11-19 20:18:57.832085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:24.139 [2024-11-19 20:18:57.832092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:29:24.139 [2024-11-19 20:18:57.832098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:24.140 [2024-11-19 20:18:57.832113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:24.140 [2024-11-19 20:18:57.832134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:24.140 [2024-11-19 20:18:57.832154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:24.140 [2024-11-19 20:18:57.832174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:24.140 [2024-11-19 20:18:57.832194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:24.140 [2024-11-19 20:18:57.832206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:24.140 [2024-11-19 20:18:57.832213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:29:24.140 [2024-11-19 20:18:57.832219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:24.140 [2024-11-19 20:18:57.832254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:24.140 [2024-11-19 20:18:57.832262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:29:24.140 [2024-11-19 20:18:57.832268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:24.140 [2024-11-19 20:18:57.832283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:29:24.140 [2024-11-19 20:18:57.832290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832299] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:24.140 [2024-11-19 20:18:57.832308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:24.140 [2024-11-19 20:18:57.832316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:24.140 [2024-11-19 20:18:57.832332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:24.140 [2024-11-19 20:18:57.832339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:29:24.140 [2024-11-19 20:18:57.832346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:29:24.140 [2024-11-19 20:18:57.832353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:24.140 [2024-11-19 20:18:57.832360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:29:24.140 [2024-11-19 20:18:57.832368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:29:24.140 [2024-11-19 20:18:57.832376] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:24.140 [2024-11-19 20:18:57.832385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:24.140 [2024-11-19 20:18:57.832402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:24.140 [2024-11-19 20:18:57.832409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:24.140 [2024-11-19 20:18:57.832416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:24.140 [2024-11-19 20:18:57.832424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:24.140 [2024-11-19 20:18:57.832458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:24.140 [2024-11-19 20:18:57.832466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:24.140 [2024-11-19 20:18:57.832473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:24.140 [2024-11-19 20:18:57.832480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:24.140 [2024-11-19 20:18:57.832488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:24.140 [2024-11-19 20:18:57.832524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:24.140 [2024-11-19 20:18:57.832536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:24.140 [2024-11-19 20:18:57.832545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:24.140 [2024-11-19 20:18:57.832552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:24.140 [2024-11-19 20:18:57.832558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:24.140 [2024-11-19 20:18:57.832565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:24.140 [2024-11-19 20:18:57.832574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.832583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:24.140 [2024-11-19 20:18:57.832591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:29:24.140 [2024-11-19 20:18:57.832598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.865161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.865238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.140 [2024-11-19 20:18:57.865252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.518 ms 00:29:24.140 [2024-11-19 20:18:57.865260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.865352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.865362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:24.140 [2024-11-19 20:18:57.865372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:24.140 [2024-11-19 20:18:57.865380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.911079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.911325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.140 [2024-11-19 20:18:57.911349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.636 ms 00:29:24.140 [2024-11-19 20:18:57.911359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.911410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.911420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.140 [2024-11-19 20:18:57.911430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:24.140 [2024-11-19 20:18:57.911443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.912043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.912080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.140 [2024-11-19 20:18:57.912092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:29:24.140 [2024-11-19 20:18:57.912100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.912281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.912300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.140 [2024-11-19 20:18:57.912309] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:29:24.140 [2024-11-19 20:18:57.912324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.140 [2024-11-19 20:18:57.928686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.140 [2024-11-19 20:18:57.928735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.140 [2024-11-19 20:18:57.928750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.340 ms 00:29:24.140 [2024-11-19 20:18:57.928759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:57.943183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:24.402 [2024-11-19 20:18:57.943396] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:24.402 [2024-11-19 20:18:57.943417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:57.943425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:24.402 [2024-11-19 20:18:57.943436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.545 ms 00:29:24.402 [2024-11-19 20:18:57.943445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:57.969434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:57.969500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:24.402 [2024-11-19 20:18:57.969522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.941 ms 00:29:24.402 [2024-11-19 20:18:57.969530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:57.982969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:57.983163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:24.402 [2024-11-19 20:18:57.983184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.379 ms 00:29:24.402 [2024-11-19 20:18:57.983191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:57.996402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:57.996450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:24.402 [2024-11-19 20:18:57.996463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.149 ms 00:29:24.402 [2024-11-19 20:18:57.996471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:57.997129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:57.997166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:24.402 [2024-11-19 20:18:57.997178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:29:24.402 [2024-11-19 20:18:57.997187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.064886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:58.064944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:24.402 [2024-11-19 20:18:58.064961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.675 ms 00:29:24.402 [2024-11-19 20:18:58.064978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.076564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:24.402 [2024-11-19 20:18:58.080120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:58.080168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:24.402 [2024-11-19 20:18:58.080182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.080 ms 00:29:24.402 [2024-11-19 20:18:58.080191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.080347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:58.080361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:24.402 [2024-11-19 20:18:58.080372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:24.402 [2024-11-19 20:18:58.080381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.080458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:58.080469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:24.402 [2024-11-19 20:18:58.080479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:24.402 [2024-11-19 20:18:58.080487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.080508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.402 [2024-11-19 20:18:58.080519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:24.402 [2024-11-19 20:18:58.080527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:24.402 [2024-11-19 20:18:58.080535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.402 [2024-11-19 20:18:58.080572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:24.403 [2024-11-19 20:18:58.080583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.403 [2024-11-19 20:18:58.080594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:24.403 [2024-11-19 20:18:58.080602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:24.403 [2024-11-19 20:18:58.080611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.403 [2024-11-19 20:18:58.106974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.403 [2024-11-19 20:18:58.107025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:24.403 [2024-11-19 20:18:58.107039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.341 ms 00:29:24.403 [2024-11-19 20:18:58.107048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.403 [2024-11-19 20:18:58.107145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.403 [2024-11-19 20:18:58.107156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:24.403 [2024-11-19 20:18:58.107166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:24.403 [2024-11-19 20:18:58.107175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
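
Each management step above is logged by trace_step with a name and a duration. A minimal profiling sketch, assuming the console output has been captured one record per line into build.log (a hypothetical filename), that totals the per-step costs and lists the slowest steps:

# Sum every trace_step duration and print the total (shell + awk sketch).
awk -F'duration: ' '/trace_step/ && /duration:/ { split($2, a, " "); sum += a[1] }
                    END { printf "trace_step total: %.3f ms\n", sum }' build.log
# Pair each step name with its duration and show the five most expensive.
grep -oE "name: .*|duration: [0-9.]+ ms" build.log | paste - - | sort -t: -k3 -rn | head -5
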
00:29:24.403 [2024-11-19 20:18:58.109049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.044 ms, result 0 00:29:25.346  [2024-11-19T20:19:00.525Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-19T20:19:01.469Z] Copying: 27/1024 [MB] (12 MBps) [2024-11-19T20:19:02.415Z] Copying: 46/1024 [MB] (19 MBps) [2024-11-19T20:19:03.357Z] Copying: 66/1024 [MB] (19 MBps) [2024-11-19T20:19:04.387Z] Copying: 81/1024 [MB] (15 MBps) [2024-11-19T20:19:05.332Z] Copying: 94/1024 [MB] (13 MBps) [2024-11-19T20:19:06.267Z] Copying: 133/1024 [MB] (38 MBps) [2024-11-19T20:19:07.201Z] Copying: 177/1024 [MB] (44 MBps) [2024-11-19T20:19:08.142Z] Copying: 214/1024 [MB] (36 MBps) [2024-11-19T20:19:09.530Z] Copying: 244/1024 [MB] (30 MBps) [2024-11-19T20:19:10.472Z] Copying: 258/1024 [MB] (13 MBps) [2024-11-19T20:19:11.407Z] Copying: 276/1024 [MB] (17 MBps) [2024-11-19T20:19:12.350Z] Copying: 306/1024 [MB] (30 MBps) [2024-11-19T20:19:13.300Z] Copying: 326/1024 [MB] (19 MBps) [2024-11-19T20:19:14.244Z] Copying: 344/1024 [MB] (18 MBps) [2024-11-19T20:19:15.183Z] Copying: 360/1024 [MB] (15 MBps) [2024-11-19T20:19:16.129Z] Copying: 381/1024 [MB] (21 MBps) [2024-11-19T20:19:17.507Z] Copying: 399/1024 [MB] (17 MBps) [2024-11-19T20:19:18.450Z] Copying: 438/1024 [MB] (39 MBps) [2024-11-19T20:19:19.385Z] Copying: 457/1024 [MB] (19 MBps) [2024-11-19T20:19:20.326Z] Copying: 487/1024 [MB] (29 MBps) [2024-11-19T20:19:21.270Z] Copying: 504/1024 [MB] (16 MBps) [2024-11-19T20:19:22.206Z] Copying: 518/1024 [MB] (14 MBps) [2024-11-19T20:19:23.145Z] Copying: 547/1024 [MB] (29 MBps) [2024-11-19T20:19:24.524Z] Copying: 564/1024 [MB] (16 MBps) [2024-11-19T20:19:25.463Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-19T20:19:26.404Z] Copying: 590/1024 [MB] (14 MBps) [2024-11-19T20:19:27.347Z] Copying: 609/1024 [MB] (19 MBps) [2024-11-19T20:19:28.290Z] Copying: 623/1024 [MB] (14 MBps) [2024-11-19T20:19:29.236Z] Copying: 639/1024 [MB] (15 MBps) [2024-11-19T20:19:30.180Z] Copying: 656/1024 [MB] (17 MBps) [2024-11-19T20:19:31.126Z] Copying: 671/1024 [MB] (14 MBps) [2024-11-19T20:19:32.514Z] Copying: 689/1024 [MB] (17 MBps) [2024-11-19T20:19:33.459Z] Copying: 700/1024 [MB] (10 MBps) [2024-11-19T20:19:34.392Z] Copying: 710/1024 [MB] (10 MBps) [2024-11-19T20:19:35.451Z] Copying: 733/1024 [MB] (23 MBps) [2024-11-19T20:19:36.393Z] Copying: 757/1024 [MB] (23 MBps) [2024-11-19T20:19:37.334Z] Copying: 780/1024 [MB] (22 MBps) [2024-11-19T20:19:38.276Z] Copying: 794/1024 [MB] (14 MBps) [2024-11-19T20:19:39.208Z] Copying: 809/1024 [MB] (15 MBps) [2024-11-19T20:19:40.141Z] Copying: 849/1024 [MB] (39 MBps) [2024-11-19T20:19:41.526Z] Copying: 885/1024 [MB] (35 MBps) [2024-11-19T20:19:42.469Z] Copying: 904/1024 [MB] (19 MBps) [2024-11-19T20:19:43.411Z] Copying: 923/1024 [MB] (19 MBps) [2024-11-19T20:19:44.354Z] Copying: 936/1024 [MB] (13 MBps) [2024-11-19T20:19:45.287Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-19T20:19:46.221Z] Copying: 970/1024 [MB] (22 MBps) [2024-11-19T20:19:46.480Z] Copying: 1010/1024 [MB] (40 MBps) [2024-11-19T20:19:46.480Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-19 20:19:46.454778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.686 [2024-11-19 20:19:46.454811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:12.686 [2024-11-19 20:19:46.454821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:12.686 [2024-11-19 20:19:46.454831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:12.686 [2024-11-19 20:19:46.454847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:12.686 [2024-11-19 20:19:46.456923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.686 [2024-11-19 20:19:46.456949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:12.686 [2024-11-19 20:19:46.456958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:30:12.686 [2024-11-19 20:19:46.456964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.686 [2024-11-19 20:19:46.458790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.686 [2024-11-19 20:19:46.458816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:12.687 [2024-11-19 20:19:46.458823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:30:12.687 [2024-11-19 20:19:46.458829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.687 [2024-11-19 20:19:46.458848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.687 [2024-11-19 20:19:46.458855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:12.687 [2024-11-19 20:19:46.458862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:12.687 [2024-11-19 20:19:46.458867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.687 [2024-11-19 20:19:46.458904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.687 [2024-11-19 20:19:46.458912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:12.687 [2024-11-19 20:19:46.458919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:12.687 [2024-11-19 20:19:46.458925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.687 [2024-11-19 20:19:46.458934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:12.687 [2024-11-19 20:19:46.458944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.458997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 
20:19:46.459003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:30:12.687 [2024-11-19 20:19:46.459266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:12.687 [2024-11-19 20:19:46.459516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:12.688 [2024-11-19 20:19:46.459649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:12.688 [2024-11-19 20:19:46.459655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6cd190ad-34d2-462b-b0b7-b02d87ff5233 00:30:12.688 [2024-11-19 20:19:46.459660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:12.688 [2024-11-19 20:19:46.459666] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:12.688 [2024-11-19 20:19:46.459671] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:12.688 [2024-11-19 20:19:46.459677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:12.688 [2024-11-19 20:19:46.459684] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:12.688 [2024-11-19 20:19:46.459690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:12.688 [2024-11-19 20:19:46.459695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:12.688 [2024-11-19 20:19:46.459701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:12.688 [2024-11-19 20:19:46.459706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:12.688 [2024-11-19 20:19:46.459711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
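
The shutdown dump above emits one record per band followed by aggregate statistics. Two small sketches for summarizing it, again assuming the output sits one record per line in build.log (hypothetical filename):

# Tally band states across all 100 bands (here: 100x "free").
grep -oE 'state: [a-z]+' build.log | sort | uniq -c
# WAF (write amplification factor) is total device writes divided by user
# writes; with 32 internal writes and 0 user writes it is reported as "inf".
awk 'BEGIN { total = 32; user = 0; print (user ? total / user : "inf") }'
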
00:30:12.688 [2024-11-19 20:19:46.459717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:12.688 [2024-11-19 20:19:46.459723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:30:12.688 [2024-11-19 20:19:46.459728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.688 [2024-11-19 20:19:46.469459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.688 [2024-11-19 20:19:46.469483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:12.688 [2024-11-19 20:19:46.469495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.720 ms 00:30:12.688 [2024-11-19 20:19:46.469501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.688 [2024-11-19 20:19:46.469761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.688 [2024-11-19 20:19:46.469767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:12.688 [2024-11-19 20:19:46.469773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:30:12.688 [2024-11-19 20:19:46.469779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.495201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.495239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:12.949 [2024-11-19 20:19:46.495246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.495252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.495292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.495298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:12.949 [2024-11-19 20:19:46.495305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.495310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.495345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.495352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:12.949 [2024-11-19 20:19:46.495361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.495366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.495377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.495383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:12.949 [2024-11-19 20:19:46.495388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.495396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.554601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.554632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:12.949 [2024-11-19 20:19:46.554645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.554651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 
20:19:46.603630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:12.949 [2024-11-19 20:19:46.603669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:12.949 [2024-11-19 20:19:46.603746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:12.949 [2024-11-19 20:19:46.603792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:12.949 [2024-11-19 20:19:46.603864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:12.949 [2024-11-19 20:19:46.603907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:12.949 [2024-11-19 20:19:46.603952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.603958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.603992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:12.949 [2024-11-19 20:19:46.603999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:12.949 [2024-11-19 20:19:46.604006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:12.949 [2024-11-19 20:19:46.604012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.949 [2024-11-19 20:19:46.604100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.298 ms, result 0 00:30:13.893 00:30:13.893 00:30:14.155 20:19:47 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:14.155 [2024-11-19 20:19:47.771759] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:30:14.155 [2024-11-19 20:19:47.772144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82338 ] 00:30:14.155 [2024-11-19 20:19:47.937408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.416 [2024-11-19 20:19:48.057343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.677 [2024-11-19 20:19:48.352578] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:14.677 [2024-11-19 20:19:48.352657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:14.938 [2024-11-19 20:19:48.514949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.515014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:14.938 [2024-11-19 20:19:48.515036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:14.938 [2024-11-19 20:19:48.515046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.515103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.515114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:14.938 [2024-11-19 20:19:48.515125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:14.938 [2024-11-19 20:19:48.515134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.515154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:14.938 [2024-11-19 20:19:48.515885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:14.938 [2024-11-19 20:19:48.515914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.515924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:14.938 [2024-11-19 20:19:48.515934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:30:14.938 [2024-11-19 20:19:48.515942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516333] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:14.938 [2024-11-19 20:19:48.516365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.516374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:14.938 [2024-11-19 20:19:48.516387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:14.938 [2024-11-19 20:19:48.516395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.516456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:14.938 [2024-11-19 20:19:48.516464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:14.938 
[2024-11-19 20:19:48.516472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.516756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:14.938 [2024-11-19 20:19:48.516765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:30:14.938 [2024-11-19 20:19:48.516772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.516853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:14.938 [2024-11-19 20:19:48.516861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:14.938 [2024-11-19 20:19:48.516869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.516901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:14.938 [2024-11-19 20:19:48.516910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:14.938 [2024-11-19 20:19:48.516920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.516939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:14.938 [2024-11-19 20:19:48.521475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.521655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:14.938 [2024-11-19 20:19:48.522107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.541 ms 00:30:14.938 [2024-11-19 20:19:48.522162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.522446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.522496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:14.938 [2024-11-19 20:19:48.522520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:14.938 [2024-11-19 20:19:48.522540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.522615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:14.938 [2024-11-19 20:19:48.522845] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:14.938 [2024-11-19 20:19:48.522915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:14.938 [2024-11-19 20:19:48.522954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:14.938 [2024-11-19 20:19:48.523084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:14.938 [2024-11-19 20:19:48.523133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:14.938 [2024-11-19 20:19:48.523166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:30:14.938 [2024-11-19 20:19:48.523199] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:14.938 [2024-11-19 20:19:48.523378] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:14.938 [2024-11-19 20:19:48.523388] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:14.938 [2024-11-19 20:19:48.523397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:14.938 [2024-11-19 20:19:48.523406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.523414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:14.938 [2024-11-19 20:19:48.523423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:30:14.938 [2024-11-19 20:19:48.523431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.523528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.938 [2024-11-19 20:19:48.523537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:14.938 [2024-11-19 20:19:48.523548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:14.938 [2024-11-19 20:19:48.523558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.938 [2024-11-19 20:19:48.523664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:14.938 [2024-11-19 20:19:48.523675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:14.938 [2024-11-19 20:19:48.523684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:14.938 [2024-11-19 20:19:48.523708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:14.938 [2024-11-19 20:19:48.523729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:14.938 [2024-11-19 20:19:48.523743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:14.938 [2024-11-19 20:19:48.523749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:14.938 [2024-11-19 20:19:48.523756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:14.938 [2024-11-19 20:19:48.523763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:14.938 [2024-11-19 20:19:48.523770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:14.938 [2024-11-19 20:19:48.523777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:30:14.938 [2024-11-19 20:19:48.523797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:14.938 [2024-11-19 20:19:48.523817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:14.938 [2024-11-19 20:19:48.523837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:14.938 [2024-11-19 20:19:48.523857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:14.938 [2024-11-19 20:19:48.523878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:14.938 [2024-11-19 20:19:48.523890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:14.938 [2024-11-19 20:19:48.523896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:14.938 [2024-11-19 20:19:48.523910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:14.938 [2024-11-19 20:19:48.523917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:14.938 [2024-11-19 20:19:48.523924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:14.938 [2024-11-19 20:19:48.523932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:14.938 [2024-11-19 20:19:48.523939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:14.938 [2024-11-19 20:19:48.523946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.938 [2024-11-19 20:19:48.523952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:14.938 [2024-11-19 20:19:48.523959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:14.938 [2024-11-19 20:19:48.523966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.939 [2024-11-19 20:19:48.523972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:14.939 [2024-11-19 20:19:48.523980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:14.939 [2024-11-19 20:19:48.523987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:14.939 [2024-11-19 20:19:48.523995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:14.939 [2024-11-19 20:19:48.524002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:14.939 [2024-11-19 20:19:48.524009] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:14.939 [2024-11-19 20:19:48.524016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:14.939 [2024-11-19 20:19:48.524023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:14.939 [2024-11-19 20:19:48.524030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:14.939 [2024-11-19 20:19:48.524036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:14.939 [2024-11-19 20:19:48.524045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:14.939 [2024-11-19 20:19:48.524057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:14.939 [2024-11-19 20:19:48.524073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:14.939 [2024-11-19 20:19:48.524080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:14.939 [2024-11-19 20:19:48.524087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:14.939 [2024-11-19 20:19:48.524094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:14.939 [2024-11-19 20:19:48.524102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:14.939 [2024-11-19 20:19:48.524110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:14.939 [2024-11-19 20:19:48.524116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:14.939 [2024-11-19 20:19:48.524124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:14.939 [2024-11-19 20:19:48.524131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:14.939 [2024-11-19 20:19:48.524169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:14.939 [2024-11-19 20:19:48.524182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:14.939 [2024-11-19 20:19:48.524199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:14.939 [2024-11-19 20:19:48.524207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:14.939 [2024-11-19 20:19:48.524215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:14.939 [2024-11-19 20:19:48.524241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.524251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:14.939 [2024-11-19 20:19:48.524259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:30:14.939 [2024-11-19 20:19:48.524267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.552101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.552147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:14.939 [2024-11-19 20:19:48.552160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.790 ms 00:30:14.939 [2024-11-19 20:19:48.552169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.552272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.552282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:14.939 [2024-11-19 20:19:48.552291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:30:14.939 [2024-11-19 20:19:48.552303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.599476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.599532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:14.939 [2024-11-19 20:19:48.599546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.110 ms 00:30:14.939 [2024-11-19 20:19:48.599555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.599607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.599618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:14.939 [2024-11-19 20:19:48.599628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:14.939 [2024-11-19 20:19:48.599636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.599754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.599766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:14.939 [2024-11-19 20:19:48.599775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:14.939 [2024-11-19 20:19:48.599784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.599912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.599924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:14.939 [2024-11-19 20:19:48.599933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:14.939 [2024-11-19 20:19:48.599941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.615953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.616003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:14.939 [2024-11-19 20:19:48.616014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.993 ms 00:30:14.939 [2024-11-19 20:19:48.616022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.616179] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:14.939 [2024-11-19 20:19:48.616194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:14.939 [2024-11-19 20:19:48.616204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.616215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:14.939 [2024-11-19 20:19:48.616252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:14.939 [2024-11-19 20:19:48.616261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.628529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.628572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:14.939 [2024-11-19 20:19:48.628583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:30:14.939 [2024-11-19 20:19:48.628591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.628717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.628728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:14.939 [2024-11-19 20:19:48.628737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:14.939 [2024-11-19 20:19:48.628750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.628802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.628812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:14.939 [2024-11-19 20:19:48.628821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:14.939 [2024-11-19 20:19:48.628829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.629447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.629469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:14.939 [2024-11-19 20:19:48.629479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:30:14.939 [2024-11-19 20:19:48.629487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.629504] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
00:30:14.939 [2024-11-19 20:19:48.629518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.629526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:14.939 [2024-11-19 20:19:48.629533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:14.939 [2024-11-19 20:19:48.629541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.642256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:14.939 [2024-11-19 20:19:48.642592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.642611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:14.939 [2024-11-19 20:19:48.642623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.032 ms 00:30:14.939 [2024-11-19 20:19:48.642631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.644809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.939 [2024-11-19 20:19:48.644847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:14.939 [2024-11-19 20:19:48.644858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:30:14.939 [2024-11-19 20:19:48.644867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.939 [2024-11-19 20:19:48.644966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.940 [2024-11-19 20:19:48.644976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:14.940 [2024-11-19 20:19:48.644986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:14.940 [2024-11-19 20:19:48.644995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.940 [2024-11-19 20:19:48.645019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.940 [2024-11-19 20:19:48.645029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:14.940 [2024-11-19 20:19:48.645042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:14.940 [2024-11-19 20:19:48.645049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.940 [2024-11-19 20:19:48.645080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:14.940 [2024-11-19 20:19:48.645090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.940 [2024-11-19 20:19:48.645097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:14.940 [2024-11-19 20:19:48.645105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:14.940 [2024-11-19 20:19:48.645112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.940 [2024-11-19 20:19:48.672524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.940 [2024-11-19 20:19:48.672585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:14.940 [2024-11-19 20:19:48.672598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.391 ms 00:30:14.940 [2024-11-19 20:19:48.672607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.940 [2024-11-19 20:19:48.672699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.940 [2024-11-19 
20:19:48.672709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:14.940 [2024-11-19 20:19:48.672718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:14.940 [2024-11-19 20:19:48.672725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.940 [2024-11-19 20:19:48.673972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.558 ms, result 0 00:30:16.326  [2024-11-19T20:19:51.064Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-19T20:19:52.006Z] Copying: 31/1024 [MB] (18 MBps) [2024-11-19T20:19:52.951Z] Copying: 50/1024 [MB] (18 MBps) [2024-11-19T20:19:53.895Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-19T20:19:55.281Z] Copying: 90/1024 [MB] (23 MBps) [2024-11-19T20:19:56.226Z] Copying: 104/1024 [MB] (13 MBps) [2024-11-19T20:19:57.169Z] Copying: 118/1024 [MB] (14 MBps) [2024-11-19T20:19:58.111Z] Copying: 137/1024 [MB] (19 MBps) [2024-11-19T20:19:59.054Z] Copying: 161/1024 [MB] (23 MBps) [2024-11-19T20:19:59.997Z] Copying: 187/1024 [MB] (26 MBps) [2024-11-19T20:20:00.941Z] Copying: 205/1024 [MB] (17 MBps) [2024-11-19T20:20:01.884Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-19T20:20:03.267Z] Copying: 226/1024 [MB] (10 MBps) [2024-11-19T20:20:04.211Z] Copying: 238/1024 [MB] (11 MBps) [2024-11-19T20:20:05.153Z] Copying: 249/1024 [MB] (10 MBps) [2024-11-19T20:20:06.204Z] Copying: 261/1024 [MB] (11 MBps) [2024-11-19T20:20:07.148Z] Copying: 271/1024 [MB] (10 MBps) [2024-11-19T20:20:08.092Z] Copying: 290/1024 [MB] (18 MBps) [2024-11-19T20:20:09.039Z] Copying: 308/1024 [MB] (17 MBps) [2024-11-19T20:20:09.982Z] Copying: 330/1024 [MB] (22 MBps) [2024-11-19T20:20:10.923Z] Copying: 341/1024 [MB] (10 MBps) [2024-11-19T20:20:11.868Z] Copying: 359/1024 [MB] (18 MBps) [2024-11-19T20:20:13.250Z] Copying: 379/1024 [MB] (19 MBps) [2024-11-19T20:20:14.186Z] Copying: 400/1024 [MB] (21 MBps) [2024-11-19T20:20:15.127Z] Copying: 428/1024 [MB] (27 MBps) [2024-11-19T20:20:16.077Z] Copying: 442/1024 [MB] (13 MBps) [2024-11-19T20:20:17.023Z] Copying: 457/1024 [MB] (15 MBps) [2024-11-19T20:20:17.968Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-19T20:20:18.911Z] Copying: 482/1024 [MB] (13 MBps) [2024-11-19T20:20:20.297Z] Copying: 501/1024 [MB] (18 MBps) [2024-11-19T20:20:20.867Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-19T20:20:22.252Z] Copying: 539/1024 [MB] (27 MBps) [2024-11-19T20:20:23.193Z] Copying: 557/1024 [MB] (18 MBps) [2024-11-19T20:20:24.135Z] Copying: 577/1024 [MB] (19 MBps) [2024-11-19T20:20:25.077Z] Copying: 592/1024 [MB] (15 MBps) [2024-11-19T20:20:26.019Z] Copying: 605/1024 [MB] (12 MBps) [2024-11-19T20:20:26.963Z] Copying: 625/1024 [MB] (20 MBps) [2024-11-19T20:20:27.908Z] Copying: 650/1024 [MB] (24 MBps) [2024-11-19T20:20:29.295Z] Copying: 668/1024 [MB] (18 MBps) [2024-11-19T20:20:29.869Z] Copying: 683/1024 [MB] (14 MBps) [2024-11-19T20:20:31.257Z] Copying: 707/1024 [MB] (23 MBps) [2024-11-19T20:20:32.203Z] Copying: 724/1024 [MB] (17 MBps) [2024-11-19T20:20:33.146Z] Copying: 743/1024 [MB] (19 MBps) [2024-11-19T20:20:34.091Z] Copying: 756/1024 [MB] (12 MBps) [2024-11-19T20:20:35.034Z] Copying: 770/1024 [MB] (13 MBps) [2024-11-19T20:20:35.977Z] Copying: 786/1024 [MB] (16 MBps) [2024-11-19T20:20:36.922Z] Copying: 797/1024 [MB] (11 MBps) [2024-11-19T20:20:37.890Z] Copying: 813/1024 [MB] (15 MBps) [2024-11-19T20:20:39.300Z] Copying: 836/1024 [MB] (23 MBps) [2024-11-19T20:20:39.872Z] Copying: 855/1024 [MB] (18 MBps) [2024-11-19T20:20:41.272Z] Copying: 
869/1024 [MB] (14 MBps) [2024-11-19T20:20:42.215Z] Copying: 887/1024 [MB] (17 MBps) [2024-11-19T20:20:43.157Z] Copying: 902/1024 [MB] (14 MBps) [2024-11-19T20:20:44.099Z] Copying: 916/1024 [MB] (13 MBps) [2024-11-19T20:20:45.042Z] Copying: 935/1024 [MB] (18 MBps) [2024-11-19T20:20:45.987Z] Copying: 950/1024 [MB] (15 MBps) [2024-11-19T20:20:46.931Z] Copying: 968/1024 [MB] (18 MBps) [2024-11-19T20:20:47.877Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-19T20:20:49.264Z] Copying: 998/1024 [MB] (16 MBps) [2024-11-19T20:20:49.264Z] Copying: 1018/1024 [MB] (19 MBps) [2024-11-19T20:20:49.835Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 20:20:49.560388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.041 [2024-11-19 20:20:49.560471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:16.041 [2024-11-19 20:20:49.560486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:16.041 [2024-11-19 20:20:49.560494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.041 [2024-11-19 20:20:49.560515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:16.041 [2024-11-19 20:20:49.563058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.041 [2024-11-19 20:20:49.563103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:16.041 [2024-11-19 20:20:49.563114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:31:16.041 [2024-11-19 20:20:49.563123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.041 [2024-11-19 20:20:49.563346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.041 [2024-11-19 20:20:49.563385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:16.041 [2024-11-19 20:20:49.563395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:31:16.041 [2024-11-19 20:20:49.563402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.041 [2024-11-19 20:20:49.563430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.041 [2024-11-19 20:20:49.563442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:16.041 [2024-11-19 20:20:49.563450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:16.041 [2024-11-19 20:20:49.563456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.041 [2024-11-19 20:20:49.563514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.041 [2024-11-19 20:20:49.563522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:16.041 [2024-11-19 20:20:49.563529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:16.041 [2024-11-19 20:20:49.563536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.041 [2024-11-19 20:20:49.563547] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:16.041 [2024-11-19 20:20:49.563559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 
261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:16.041 [2024-11-19 20:20:49.563714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563896] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.563997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 
20:20:49.564059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:16.042 [2024-11-19 20:20:49.564204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:16.042 [2024-11-19 20:20:49.564234] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6cd190ad-34d2-462b-b0b7-b02d87ff5233 00:31:16.042 [2024-11-19 
20:20:49.564244] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:16.042 [2024-11-19 20:20:49.564251] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:16.042 [2024-11-19 20:20:49.564257] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:16.042 [2024-11-19 20:20:49.564263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:16.042 [2024-11-19 20:20:49.564270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:16.042 [2024-11-19 20:20:49.564276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:16.043 [2024-11-19 20:20:49.564282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:16.043 [2024-11-19 20:20:49.564287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:16.043 [2024-11-19 20:20:49.564293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:16.043 [2024-11-19 20:20:49.564298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.043 [2024-11-19 20:20:49.564305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:16.043 [2024-11-19 20:20:49.564312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:31:16.043 [2024-11-19 20:20:49.564317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.576556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.043 [2024-11-19 20:20:49.576599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:16.043 [2024-11-19 20:20:49.576610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.223 ms 00:31:16.043 [2024-11-19 20:20:49.576617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.576920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.043 [2024-11-19 20:20:49.576927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:16.043 [2024-11-19 20:20:49.576935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:31:16.043 [2024-11-19 20:20:49.576947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.606436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.606593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:16.043 [2024-11-19 20:20:49.606610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.606619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.606679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.606687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:16.043 [2024-11-19 20:20:49.606694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.606705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.606751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.606760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:16.043 [2024-11-19 20:20:49.606766] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.606773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.606786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.606805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:16.043 [2024-11-19 20:20:49.606811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.606817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.669777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.669904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:16.043 [2024-11-19 20:20:49.669918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.669925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:16.043 [2024-11-19 20:20:49.718182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:16.043 [2024-11-19 20:20:49.718273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:16.043 [2024-11-19 20:20:49.718319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:16.043 [2024-11-19 20:20:49.718399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:16.043 [2024-11-19 20:20:49.718435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:31:16.043 [2024-11-19 20:20:49.718480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:16.043 [2024-11-19 20:20:49.718523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:16.043 [2024-11-19 20:20:49.718529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:16.043 [2024-11-19 20:20:49.718534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.043 [2024-11-19 20:20:49.718624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.218 ms, result 0 00:31:16.613 00:31:16.613 00:31:16.613 20:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:19.163 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:19.163 20:20:52 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:19.163 [2024-11-19 20:20:52.572984] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:31:19.163 [2024-11-19 20:20:52.573402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82986 ] 00:31:19.163 [2024-11-19 20:20:52.732685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.163 [2024-11-19 20:20:52.808248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.423 [2024-11-19 20:20:53.014010] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.423 [2024-11-19 20:20:53.014064] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.423 [2024-11-19 20:20:53.161900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.161937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:19.423 [2024-11-19 20:20:53.161951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:19.423 [2024-11-19 20:20:53.161958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.161991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.161998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.423 [2024-11-19 20:20:53.162006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:19.423 [2024-11-19 20:20:53.162012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.162024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:19.423 [2024-11-19 20:20:53.162546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:19.423 [2024-11-19 20:20:53.162559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 
20:20:53.162565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.423 [2024-11-19 20:20:53.162572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:31:19.423 [2024-11-19 20:20:53.162577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.162824] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:19.423 [2024-11-19 20:20:53.162842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.162848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:19.423 [2024-11-19 20:20:53.162857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:19.423 [2024-11-19 20:20:53.162863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.162918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.162931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:19.423 [2024-11-19 20:20:53.162938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:19.423 [2024-11-19 20:20:53.162943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.163139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.163149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.423 [2024-11-19 20:20:53.163154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:19.423 [2024-11-19 20:20:53.163160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.163207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.163214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.423 [2024-11-19 20:20:53.163229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:19.423 [2024-11-19 20:20:53.163236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.163252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.163258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:19.423 [2024-11-19 20:20:53.163264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:19.423 [2024-11-19 20:20:53.163271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.163283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:19.423 [2024-11-19 20:20:53.166215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.166318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.423 [2024-11-19 20:20:53.166368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:31:19.423 [2024-11-19 20:20:53.166386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.166421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.166473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:19.423 [2024-11-19 
20:20:53.166491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:19.423 [2024-11-19 20:20:53.166506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.166589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:19.423 [2024-11-19 20:20:53.166622] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:19.423 [2024-11-19 20:20:53.166773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:19.423 [2024-11-19 20:20:53.166810] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:19.423 [2024-11-19 20:20:53.166905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:19.423 [2024-11-19 20:20:53.166976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:19.423 [2024-11-19 20:20:53.167001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:19.423 [2024-11-19 20:20:53.167025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:19.423 [2024-11-19 20:20:53.167050] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:19.423 [2024-11-19 20:20:53.167071] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:19.423 [2024-11-19 20:20:53.167118] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:19.423 [2024-11-19 20:20:53.167134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:19.423 [2024-11-19 20:20:53.167149] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:19.423 [2024-11-19 20:20:53.167164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.167178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:19.423 [2024-11-19 20:20:53.167193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:31:19.423 [2024-11-19 20:20:53.167207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.167294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.423 [2024-11-19 20:20:53.167312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:19.423 [2024-11-19 20:20:53.167327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:19.423 [2024-11-19 20:20:53.167344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.423 [2024-11-19 20:20:53.167429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:19.423 [2024-11-19 20:20:53.167495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:19.423 [2024-11-19 20:20:53.167511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.423 [2024-11-19 20:20:53.167526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.423 [2024-11-19 20:20:53.167566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:19.423 [2024-11-19 20:20:53.167582] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:31:19.423 [2024-11-19 20:20:53.167597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:19.423 [2024-11-19 20:20:53.167611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:19.423 [2024-11-19 20:20:53.167625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:19.423 [2024-11-19 20:20:53.167639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.423 [2024-11-19 20:20:53.167653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:19.423 [2024-11-19 20:20:53.167667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:19.423 [2024-11-19 20:20:53.167709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.423 [2024-11-19 20:20:53.167725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:19.424 [2024-11-19 20:20:53.167739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:19.424 [2024-11-19 20:20:53.167753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:19.424 [2024-11-19 20:20:53.167785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:19.424 [2024-11-19 20:20:53.167799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:19.424 [2024-11-19 20:20:53.167826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.424 [2024-11-19 20:20:53.167873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:19.424 [2024-11-19 20:20:53.167887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.424 [2024-11-19 20:20:53.167914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:19.424 [2024-11-19 20:20:53.167928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.424 [2024-11-19 20:20:53.167955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:19.424 [2024-11-19 20:20:53.167969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:19.424 [2024-11-19 20:20:53.167982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.424 [2024-11-19 20:20:53.168019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:19.424 [2024-11-19 20:20:53.168059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:19.424 [2024-11-19 20:20:53.168092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.424 [2024-11-19 20:20:53.168108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:19.424 [2024-11-19 20:20:53.168122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:19.424 [2024-11-19 20:20:53.168136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.424 [2024-11-19 20:20:53.168149] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:19.424 [2024-11-19 20:20:53.168163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:19.424 [2024-11-19 20:20:53.168176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.424 [2024-11-19 20:20:53.168216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:19.424 [2024-11-19 20:20:53.168241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:19.424 [2024-11-19 20:20:53.168255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.424 [2024-11-19 20:20:53.168270] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:19.424 [2024-11-19 20:20:53.168285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:19.424 [2024-11-19 20:20:53.168299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.424 [2024-11-19 20:20:53.168312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.424 [2024-11-19 20:20:53.168328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:19.424 [2024-11-19 20:20:53.168361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:19.424 [2024-11-19 20:20:53.168377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:19.424 [2024-11-19 20:20:53.168411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:19.424 [2024-11-19 20:20:53.168427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:19.424 [2024-11-19 20:20:53.168441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:19.424 [2024-11-19 20:20:53.168448] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:19.424 [2024-11-19 20:20:53.168459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:19.424 [2024-11-19 20:20:53.168471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:19.424 [2024-11-19 20:20:53.168477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:19.424 [2024-11-19 20:20:53.168482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:19.424 [2024-11-19 20:20:53.168487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:19.424 [2024-11-19 20:20:53.168493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:19.424 [2024-11-19 20:20:53.168498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:19.424 [2024-11-19 20:20:53.168503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:19.424 [2024-11-19 20:20:53.168508] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:19.424 [2024-11-19 20:20:53.168515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:19.424 [2024-11-19 20:20:53.168543] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:19.424 [2024-11-19 20:20:53.168549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:19.424 [2024-11-19 20:20:53.168560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:19.424 [2024-11-19 20:20:53.168565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:19.424 [2024-11-19 20:20:53.168571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:19.424 [2024-11-19 20:20:53.168576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.424 [2024-11-19 20:20:53.168582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:19.424 [2024-11-19 20:20:53.168588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:31:19.424 [2024-11-19 20:20:53.168593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.424 [2024-11-19 20:20:53.187112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.424 [2024-11-19 20:20:53.187206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.424 [2024-11-19 20:20:53.187265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:31:19.424 [2024-11-19 20:20:53.187283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.424 [2024-11-19 20:20:53.187354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.424 [2024-11-19 20:20:53.187415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:19.424 [2024-11-19 20:20:53.187435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:19.424 [2024-11-19 20:20:53.187453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.235401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.235511] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.684 [2024-11-19 20:20:53.235557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.880 ms 00:31:19.684 [2024-11-19 20:20:53.235575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.235619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.235638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.684 [2024-11-19 20:20:53.235653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:19.684 [2024-11-19 20:20:53.235667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.235748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.235769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.684 [2024-11-19 20:20:53.235825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:19.684 [2024-11-19 20:20:53.235842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.235945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.235964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.684 [2024-11-19 20:20:53.236025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:19.684 [2024-11-19 20:20:53.236042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.246469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.246560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.684 [2024-11-19 20:20:53.246598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.403 ms 00:31:19.684 [2024-11-19 20:20:53.246615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.246710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:19.684 [2024-11-19 20:20:53.246740] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:19.684 [2024-11-19 20:20:53.246764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.246778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:19.684 [2024-11-19 20:20:53.246803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:19.684 [2024-11-19 20:20:53.246845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.684 [2024-11-19 20:20:53.255981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.684 [2024-11-19 20:20:53.256063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:19.685 [2024-11-19 20:20:53.256075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:31:19.685 [2024-11-19 20:20:53.256082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.256171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.256178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 
00:31:19.685 [2024-11-19 20:20:53.256185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:19.685 [2024-11-19 20:20:53.256190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.256245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.256253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.685 [2024-11-19 20:20:53.256259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:19.685 [2024-11-19 20:20:53.256265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.256688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.256696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.685 [2024-11-19 20:20:53.256703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:31:19.685 [2024-11-19 20:20:53.256708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.256718] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:19.685 [2024-11-19 20:20:53.256727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.256732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.685 [2024-11-19 20:20:53.256738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:19.685 [2024-11-19 20:20:53.256744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.265371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.685 [2024-11-19 20:20:53.265541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.265563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.685 [2024-11-19 20:20:53.265603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.775 ms 00:31:19.685 [2024-11-19 20:20:53.265689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.267301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.267377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.685 [2024-11-19 20:20:53.267416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:31:19.685 [2024-11-19 20:20:53.267432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.267494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.267775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.685 [2024-11-19 20:20:53.267811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:19.685 [2024-11-19 20:20:53.267828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.267880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.267941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.685 [2024-11-19 20:20:53.267965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:31:19.685 [2024-11-19 20:20:53.267980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.268013] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.685 [2024-11-19 20:20:53.268031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.268045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.685 [2024-11-19 20:20:53.268062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:19.685 [2024-11-19 20:20:53.268177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.286362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.286464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.685 [2024-11-19 20:20:53.286507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.156 ms 00:31:19.685 [2024-11-19 20:20:53.286525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.286585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.685 [2024-11-19 20:20:53.286624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.685 [2024-11-19 20:20:53.286642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:19.685 [2024-11-19 20:20:53.286656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.685 [2024-11-19 20:20:53.287375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.152 ms, result 0 00:31:20.627  [2024-11-19T20:20:55.362Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-19T20:20:56.307Z] Copying: 51/1024 [MB] (30 MBps) [2024-11-19T20:20:57.681Z] Copying: 71/1024 [MB] (19 MBps) [2024-11-19T20:20:58.615Z] Copying: 113/1024 [MB] (42 MBps) [2024-11-19T20:20:59.560Z] Copying: 159/1024 [MB] (46 MBps) [2024-11-19T20:21:00.498Z] Copying: 180/1024 [MB] (20 MBps) [2024-11-19T20:21:01.440Z] Copying: 200/1024 [MB] (19 MBps) [2024-11-19T20:21:02.374Z] Copying: 217/1024 [MB] (17 MBps) [2024-11-19T20:21:03.307Z] Copying: 260/1024 [MB] (43 MBps) [2024-11-19T20:21:04.695Z] Copying: 307/1024 [MB] (46 MBps) [2024-11-19T20:21:05.639Z] Copying: 325/1024 [MB] (18 MBps) [2024-11-19T20:21:06.581Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-19T20:21:07.521Z] Copying: 363/1024 [MB] (20 MBps) [2024-11-19T20:21:08.465Z] Copying: 386/1024 [MB] (22 MBps) [2024-11-19T20:21:09.439Z] Copying: 410/1024 [MB] (24 MBps) [2024-11-19T20:21:10.409Z] Copying: 435/1024 [MB] (25 MBps) [2024-11-19T20:21:11.354Z] Copying: 462/1024 [MB] (26 MBps) [2024-11-19T20:21:12.737Z] Copying: 482/1024 [MB] (20 MBps) [2024-11-19T20:21:13.309Z] Copying: 499/1024 [MB] (16 MBps) [2024-11-19T20:21:14.698Z] Copying: 514/1024 [MB] (14 MBps) [2024-11-19T20:21:15.644Z] Copying: 526/1024 [MB] (12 MBps) [2024-11-19T20:21:16.589Z] Copying: 542/1024 [MB] (15 MBps) [2024-11-19T20:21:17.535Z] Copying: 555/1024 [MB] (13 MBps) [2024-11-19T20:21:18.480Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-19T20:21:19.425Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-19T20:21:20.367Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-19T20:21:21.313Z] Copying: 611/1024 [MB] (25 MBps) [2024-11-19T20:21:22.704Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-19T20:21:23.648Z] Copying: 643/1024 [MB] (17 MBps) 
[2024-11-19T20:21:24.593Z] Copying: 655/1024 [MB] (12 MBps) [2024-11-19T20:21:25.539Z] Copying: 665/1024 [MB] (10 MBps) [2024-11-19T20:21:26.484Z] Copying: 675/1024 [MB] (10 MBps) [2024-11-19T20:21:27.426Z] Copying: 702056/1048576 [kB] (9952 kBps) [2024-11-19T20:21:28.369Z] Copying: 699/1024 [MB] (13 MBps) [2024-11-19T20:21:29.315Z] Copying: 714/1024 [MB] (15 MBps) [2024-11-19T20:21:30.705Z] Copying: 727/1024 [MB] (12 MBps) [2024-11-19T20:21:31.646Z] Copying: 741/1024 [MB] (14 MBps) [2024-11-19T20:21:32.588Z] Copying: 757/1024 [MB] (15 MBps) [2024-11-19T20:21:33.531Z] Copying: 771/1024 [MB] (14 MBps) [2024-11-19T20:21:34.472Z] Copying: 785/1024 [MB] (14 MBps) [2024-11-19T20:21:35.415Z] Copying: 803/1024 [MB] (17 MBps) [2024-11-19T20:21:36.361Z] Copying: 823/1024 [MB] (20 MBps) [2024-11-19T20:21:37.306Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-19T20:21:38.697Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-19T20:21:39.643Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-19T20:21:40.588Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-19T20:21:41.624Z] Copying: 888/1024 [MB] (11 MBps) [2024-11-19T20:21:42.576Z] Copying: 904/1024 [MB] (16 MBps) [2024-11-19T20:21:43.521Z] Copying: 918/1024 [MB] (13 MBps) [2024-11-19T20:21:44.467Z] Copying: 931/1024 [MB] (12 MBps) [2024-11-19T20:21:45.410Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-19T20:21:46.350Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-19T20:21:47.734Z] Copying: 978/1024 [MB] (22 MBps) [2024-11-19T20:21:48.305Z] Copying: 989/1024 [MB] (11 MBps) [2024-11-19T20:21:49.694Z] Copying: 1006/1024 [MB] (17 MBps) [2024-11-19T20:21:50.630Z] Copying: 1016/1024 [MB] (10 MBps) [2024-11-19T20:21:50.890Z] Copying: 1048064/1048576 [kB] (7224 kBps) [2024-11-19T20:21:50.890Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-19 20:21:50.737775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.096 [2024-11-19 20:21:50.737826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:17.096 [2024-11-19 20:21:50.737838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:17.096 [2024-11-19 20:21:50.737845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.096 [2024-11-19 20:21:50.739666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:17.096 [2024-11-19 20:21:50.744029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.096 [2024-11-19 20:21:50.744058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:17.096 [2024-11-19 20:21:50.744067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:32:17.096 [2024-11-19 20:21:50.744074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.096 [2024-11-19 20:21:50.751415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.096 [2024-11-19 20:21:50.751447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:17.096 [2024-11-19 20:21:50.751455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.314 ms 00:32:17.096 [2024-11-19 20:21:50.751462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.096 [2024-11-19 20:21:50.751483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.096 [2024-11-19 20:21:50.751490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:17.096 [2024-11-19 20:21:50.751496] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:17.096 [2024-11-19 20:21:50.751502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.096 [2024-11-19 20:21:50.751538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.096 [2024-11-19 20:21:50.751546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:17.096 [2024-11-19 20:21:50.751556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:17.096 [2024-11-19 20:21:50.751562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.096 [2024-11-19 20:21:50.751572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:17.096 [2024-11-19 20:21:50.751582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 00:32:17.096 [2024-11-19 20:21:50.751590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:17.096 [2024-11-19 20:21:50.751695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
20: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751844] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.751997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 
20:21:50.752143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:17.097 [2024-11-19 20:21:50.752183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:17.097 [2024-11-19 20:21:50.752196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6cd190ad-34d2-462b-b0b7-b02d87ff5233 00:32:17.097 [2024-11-19 20:21:50.752202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 00:32:17.097 [2024-11-19 20:21:50.752208] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128288 00:32:17.097 [2024-11-19 20:21:50.752213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 00:32:17.097 [2024-11-19 20:21:50.752229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:17.097 [2024-11-19 20:21:50.752235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:17.097 [2024-11-19 20:21:50.752241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:17.098 [2024-11-19 20:21:50.752249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:17.098 [2024-11-19 20:21:50.752254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:17.098 [2024-11-19 20:21:50.752259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:17.098 [2024-11-19 20:21:50.752265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.098 [2024-11-19 20:21:50.752270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:17.098 [2024-11-19 20:21:50.752277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:32:17.098 [2024-11-19 20:21:50.752282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.762015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.098 [2024-11-19 20:21:50.762042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:17.098 [2024-11-19 20:21:50.762050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.722 ms 00:32:17.098 [2024-11-19 20:21:50.762059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.762347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.098 [2024-11-19 20:21:50.762355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:17.098 [2024-11-19 20:21:50.762362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:32:17.098 [2024-11-19 20:21:50.762367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.788280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:32:17.098 [2024-11-19 20:21:50.788307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.098 [2024-11-19 20:21:50.788317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.098 [2024-11-19 20:21:50.788323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.788363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.098 [2024-11-19 20:21:50.788369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.098 [2024-11-19 20:21:50.788375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.098 [2024-11-19 20:21:50.788380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.788428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.098 [2024-11-19 20:21:50.788436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.098 [2024-11-19 20:21:50.788442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.098 [2024-11-19 20:21:50.788451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.788462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.098 [2024-11-19 20:21:50.788469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.098 [2024-11-19 20:21:50.788474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.098 [2024-11-19 20:21:50.788479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.098 [2024-11-19 20:21:50.846248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.098 [2024-11-19 20:21:50.846282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.098 [2024-11-19 20:21:50.846293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.098 [2024-11-19 20:21:50.846299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.358 [2024-11-19 20:21:50.893463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.358 [2024-11-19 20:21:50.893517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.358 [2024-11-19 20:21:50.893576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 
[2024-11-19 20:21:50.893634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.358 [2024-11-19 20:21:50.893647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:17.358 [2024-11-19 20:21:50.893685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.358 [2024-11-19 20:21:50.893729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.358 [2024-11-19 20:21:50.893775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.358 [2024-11-19 20:21:50.893780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.358 [2024-11-19 20:21:50.893786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.358 [2024-11-19 20:21:50.893874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 157.723 ms, result 0 00:32:18.739 00:32:18.739 00:32:18.739 20:21:52 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:18.739 [2024-11-19 20:21:52.322124] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
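A quick cross-check of the figures reported above, assuming spdk_dd's --skip and --count are counted in the FTL's 4 KiB blocks (an inference from the 1024 MiB total shown by the copy progress, not something the log states explicitly):

\[
\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{128288}{128256} \approx 1.0002
\]

\[
\texttt{--count}:\; 262144 \times 4\,\mathrm{KiB} = 1024\,\mathrm{MiB}, \qquad
\texttt{--skip}:\; 131072 \times 4\,\mathrm{KiB} = 512\,\mathrm{MiB}
\]

The reported WAF is also consistent with the band dump from the fast-shutdown pass: 128256 user blocks landed in Band 1 while 128288 total blocks were written, i.e. 32 internal writes of overhead.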
00:32:18.739 [2024-11-19 20:21:52.322443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83601 ] 00:32:18.739 [2024-11-19 20:21:52.476746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.997 [2024-11-19 20:21:52.554866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.997 [2024-11-19 20:21:52.758945] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:18.997 [2024-11-19 20:21:52.758990] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:19.257 [2024-11-19 20:21:52.906019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.906055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:19.257 [2024-11-19 20:21:52.906069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:19.257 [2024-11-19 20:21:52.906075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.906107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.906115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.257 [2024-11-19 20:21:52.906123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:19.257 [2024-11-19 20:21:52.906128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.906141] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:19.257 [2024-11-19 20:21:52.906702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:19.257 [2024-11-19 20:21:52.906720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.906726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:19.257 [2024-11-19 20:21:52.906732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:32:19.257 [2024-11-19 20:21:52.906738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.906933] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:19.257 [2024-11-19 20:21:52.906949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.906956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:19.257 [2024-11-19 20:21:52.906964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:19.257 [2024-11-19 20:21:52.906970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.907001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.907008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:19.257 [2024-11-19 20:21:52.907014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:19.257 [2024-11-19 20:21:52.907020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.907246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:19.257 [2024-11-19 20:21:52.907257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.257 [2024-11-19 20:21:52.907263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:32:19.257 [2024-11-19 20:21:52.907268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.257 [2024-11-19 20:21:52.907316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.257 [2024-11-19 20:21:52.907322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.257 [2024-11-19 20:21:52.907329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:19.258 [2024-11-19 20:21:52.907334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.907352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.258 [2024-11-19 20:21:52.907359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:19.258 [2024-11-19 20:21:52.907365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.258 [2024-11-19 20:21:52.907372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.907384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:19.258 [2024-11-19 20:21:52.910098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.258 [2024-11-19 20:21:52.910228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.258 [2024-11-19 20:21:52.910241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:32:19.258 [2024-11-19 20:21:52.910246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.910273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.258 [2024-11-19 20:21:52.910279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:19.258 [2024-11-19 20:21:52.910286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:19.258 [2024-11-19 20:21:52.910292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.910323] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:19.258 [2024-11-19 20:21:52.910341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:19.258 [2024-11-19 20:21:52.910370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:19.258 [2024-11-19 20:21:52.910381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:19.258 [2024-11-19 20:21:52.910459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:19.258 [2024-11-19 20:21:52.910467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:19.258 [2024-11-19 20:21:52.910475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:19.258 [2024-11-19 20:21:52.910483] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910490] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910496] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:19.258 [2024-11-19 20:21:52.910503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:19.258 [2024-11-19 20:21:52.910509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:19.258 [2024-11-19 20:21:52.910514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:19.258 [2024-11-19 20:21:52.910520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.258 [2024-11-19 20:21:52.910525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:19.258 [2024-11-19 20:21:52.910530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:32:19.258 [2024-11-19 20:21:52.910536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.910599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.258 [2024-11-19 20:21:52.910605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:19.258 [2024-11-19 20:21:52.910611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:19.258 [2024-11-19 20:21:52.910618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.258 [2024-11-19 20:21:52.910692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:19.258 [2024-11-19 20:21:52.910699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:19.258 [2024-11-19 20:21:52.910705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:19.258 [2024-11-19 20:21:52.910721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:19.258 [2024-11-19 20:21:52.910738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.258 [2024-11-19 20:21:52.910748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:19.258 [2024-11-19 20:21:52.910755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:19.258 [2024-11-19 20:21:52.910761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.258 [2024-11-19 20:21:52.910766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:19.258 [2024-11-19 20:21:52.910771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:19.258 [2024-11-19 20:21:52.910776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:19.258 [2024-11-19 20:21:52.910790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910795] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:19.258 [2024-11-19 20:21:52.910805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:19.258 [2024-11-19 20:21:52.910820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:19.258 [2024-11-19 20:21:52.910834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:19.258 [2024-11-19 20:21:52.910858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:19.258 [2024-11-19 20:21:52.910873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.258 [2024-11-19 20:21:52.910882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:19.258 [2024-11-19 20:21:52.910887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:19.258 [2024-11-19 20:21:52.910892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.258 [2024-11-19 20:21:52.910897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:19.258 [2024-11-19 20:21:52.910902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:19.258 [2024-11-19 20:21:52.910907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:19.258 [2024-11-19 20:21:52.910917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:19.258 [2024-11-19 20:21:52.910922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910927] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:19.258 [2024-11-19 20:21:52.910934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:19.258 [2024-11-19 20:21:52.910939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.258 [2024-11-19 20:21:52.910950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:19.258 [2024-11-19 20:21:52.910955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:19.258 [2024-11-19 20:21:52.910961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:19.258 
[2024-11-19 20:21:52.910966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:19.258 [2024-11-19 20:21:52.910971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:19.258 [2024-11-19 20:21:52.910976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:19.258 [2024-11-19 20:21:52.910982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:19.258 [2024-11-19 20:21:52.910990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.258 [2024-11-19 20:21:52.910996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:19.258 [2024-11-19 20:21:52.911002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:19.258 [2024-11-19 20:21:52.911007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:19.258 [2024-11-19 20:21:52.911012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:19.258 [2024-11-19 20:21:52.911017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:19.258 [2024-11-19 20:21:52.911023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:19.258 [2024-11-19 20:21:52.911028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:19.258 [2024-11-19 20:21:52.911033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:19.258 [2024-11-19 20:21:52.911038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:19.258 [2024-11-19 20:21:52.911043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:19.258 [2024-11-19 20:21:52.911049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:19.259 [2024-11-19 20:21:52.911054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:19.259 [2024-11-19 20:21:52.911059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:19.259 [2024-11-19 20:21:52.911064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:19.259 [2024-11-19 20:21:52.911070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:19.259 [2024-11-19 20:21:52.911075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.259 [2024-11-19 20:21:52.911082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:19.259 [2024-11-19 20:21:52.911087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:19.259 [2024-11-19 20:21:52.911092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:19.259 [2024-11-19 20:21:52.911098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:19.259 [2024-11-19 20:21:52.911103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.911109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:19.259 [2024-11-19 20:21:52.911114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:32:19.259 [2024-11-19 20:21:52.911120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.929273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.929377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.259 [2024-11-19 20:21:52.929389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.124 ms 00:32:19.259 [2024-11-19 20:21:52.929395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.929458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.929464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:19.259 [2024-11-19 20:21:52.929470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:19.259 [2024-11-19 20:21:52.929479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.966485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.966514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.259 [2024-11-19 20:21:52.966523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.965 ms 00:32:19.259 [2024-11-19 20:21:52.966529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.966561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.966570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.259 [2024-11-19 20:21:52.966576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.259 [2024-11-19 20:21:52.966581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.966650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.966659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.259 [2024-11-19 20:21:52.966666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:19.259 [2024-11-19 20:21:52.966672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.966757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.966766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.259 [2024-11-19 20:21:52.966772] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:19.259 [2024-11-19 20:21:52.966778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.977311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.977340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.259 [2024-11-19 20:21:52.977348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.520 ms 00:32:19.259 [2024-11-19 20:21:52.977354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.977435] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:19.259 [2024-11-19 20:21:52.977444] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:19.259 [2024-11-19 20:21:52.977451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.977457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:19.259 [2024-11-19 20:21:52.977465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:19.259 [2024-11-19 20:21:52.977471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.987931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.988032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:19.259 [2024-11-19 20:21:52.988044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.448 ms 00:32:19.259 [2024-11-19 20:21:52.988050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.988139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.988146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:19.259 [2024-11-19 20:21:52.988152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:19.259 [2024-11-19 20:21:52.988158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.988194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.988202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:19.259 [2024-11-19 20:21:52.988208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:19.259 [2024-11-19 20:21:52.988213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.988675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.988686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:19.259 [2024-11-19 20:21:52.988693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:32:19.259 [2024-11-19 20:21:52.988698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.988709] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:19.259 [2024-11-19 20:21:52.988719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.988725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:19.259 [2024-11-19 20:21:52.988731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:19.259 [2024-11-19 20:21:52.988736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.997304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:19.259 [2024-11-19 20:21:52.997407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.997415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:19.259 [2024-11-19 20:21:52.997421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.658 ms 00:32:19.259 [2024-11-19 20:21:52.997427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.999114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.999203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:19.259 [2024-11-19 20:21:52.999239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:32:19.259 [2024-11-19 20:21:52.999247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.999289] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:19.259 [2024-11-19 20:21:52.999629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.999646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:19.259 [2024-11-19 20:21:52.999653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:32:19.259 [2024-11-19 20:21:52.999658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.999697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.999708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:19.259 [2024-11-19 20:21:52.999714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.259 [2024-11-19 20:21:52.999720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:52.999742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:19.259 [2024-11-19 20:21:52.999750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:52.999755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:19.259 [2024-11-19 20:21:52.999761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:19.259 [2024-11-19 20:21:52.999767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:53.017819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:53.017918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:19.259 [2024-11-19 20:21:53.017931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.039 ms 00:32:19.259 [2024-11-19 20:21:53.017937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.259 [2024-11-19 20:21:53.017987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.259 [2024-11-19 20:21:53.017994] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:19.259 [2024-11-19 20:21:53.018001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:32:19.259 [2024-11-19 20:21:53.018007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:19.259 [2024-11-19 20:21:53.018714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.387 ms, result 0
00:32:20.643 [2024-11-19T20:21:55.381Z] Copying: 26/1024 [MB] (26 MBps) ... [2024-11-19T20:23:02.476Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-11-19 20:23:02.331398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.682 [2024-11-19 20:23:02.331618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:28.682 [2024-11-19 20:23:02.331855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:28.682 [2024-11-19 20:23:02.331898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.682 [2024-11-19 20:23:02.331951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:28.682 [2024-11-19 20:23:02.335026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.682 [2024-11-19 20:23:02.335161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:28.682 [2024-11-19 20:23:02.335181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms
00:33:28.682 [2024-11-19 20:23:02.335190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.682 [2024-11-19 20:23:02.335447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.682 [2024-11-19 20:23:02.335459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:28.682 [2024-11-19 20:23:02.335469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms
00:33:28.682 [2024-11-19 20:23:02.335477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.682 [2024-11-19 20:23:02.335505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.682 [2024-11-19 20:23:02.335514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:33:28.682 [2024-11-19 20:23:02.335523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:33:28.682 [2024-11-19 20:23:02.335531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.682 [2024-11-19 20:23:02.335587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.682 [2024-11-19 20:23:02.335595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:33:28.682 [2024-11-19 20:23:02.335606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:33:28.682 [2024-11-19 20:23:02.335614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
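
Each management step in the FTL traces above is logged as a quadruplet of lines: Action (or, during error cleanup, Rollback), then name, duration, and status. Summing the per-step durations of the 'FTL startup' sequence gives roughly 106 ms against the reported 112.387 ms total, so almost all of the startup wall time is accounted for by the traced steps. A one-liner along these lines can cross-check a captured log (the log file name here is illustrative, not from the run):

  awk '/trace_step.*duration:/ { sum += $(NF-1) } END { printf "sum of step durations: %.3f ms\n", sum }' ftl0_startup.log
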
00:33:28.682 [2024-11-19 20:23:02.335628] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:28.682 [2024-11-19 20:23:02.335640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:28.682 [2024-11-19 20:23:02.335651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 
00:33:28.682 [2024-11-19 20:23:02.335823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:28.682 [2024-11-19 20:23:02.335838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.335986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 
wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336419] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:28.683 [2024-11-19 20:23:02.336442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:28.683 [2024-11-19 20:23:02.336449] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6cd190ad-34d2-462b-b0b7-b02d87ff5233 00:33:28.683 [2024-11-19 20:23:02.336457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:28.683 [2024-11-19 20:23:02.336465] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2848 00:33:28.683 [2024-11-19 20:23:02.336472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2816 00:33:28.683 [2024-11-19 20:23:02.336479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:33:28.683 [2024-11-19 20:23:02.336488] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:28.683 [2024-11-19 20:23:02.336498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:28.683 [2024-11-19 20:23:02.336506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:28.683 [2024-11-19 20:23:02.336513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:28.683 [2024-11-19 20:23:02.336520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:28.683 [2024-11-19 20:23:02.336527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.683 [2024-11-19 20:23:02.336535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:28.683 [2024-11-19 20:23:02.336544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:33:28.683 [2024-11-19 20:23:02.336551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.683 [2024-11-19 20:23:02.352695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.683 [2024-11-19 20:23:02.352730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:28.683 [2024-11-19 20:23:02.352741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.128 ms 00:33:28.683 [2024-11-19 20:23:02.352756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.683 [2024-11-19 20:23:02.353133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.683 [2024-11-19 20:23:02.353142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:28.684 [2024-11-19 20:23:02.353150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:33:28.684 [2024-11-19 20:23:02.353158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.684 [2024-11-19 20:23:02.389425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.684 [2024-11-19 20:23:02.389462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:28.684 [2024-11-19 20:23:02.389474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.684 [2024-11-19 20:23:02.389484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.684 [2024-11-19 20:23:02.389558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.684 [2024-11-19 20:23:02.389568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:28.684 [2024-11-19 
20:23:02.389578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.684 [2024-11-19 20:23:02.389588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.684 [2024-11-19 20:23:02.389670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.684 [2024-11-19 20:23:02.389681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:28.684 [2024-11-19 20:23:02.389694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.684 [2024-11-19 20:23:02.389702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.684 [2024-11-19 20:23:02.389720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.684 [2024-11-19 20:23:02.389729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:28.684 [2024-11-19 20:23:02.389738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.684 [2024-11-19 20:23:02.389747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.472397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.472457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.946 [2024-11-19 20:23:02.472470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.946 [2024-11-19 20:23:02.472478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.541972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.542022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.946 [2024-11-19 20:23:02.542034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.946 [2024-11-19 20:23:02.542043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.542107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.542117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.946 [2024-11-19 20:23:02.542126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.946 [2024-11-19 20:23:02.542139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.542204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.542214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.946 [2024-11-19 20:23:02.542246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.946 [2024-11-19 20:23:02.542255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.542335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.542345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.946 [2024-11-19 20:23:02.542354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.946 [2024-11-19 20:23:02.542362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.946 [2024-11-19 20:23:02.542393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.946 [2024-11-19 20:23:02.542403] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:28.946 [2024-11-19 20:23:02.542412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:28.946 [2024-11-19 20:23:02.542420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.946 [2024-11-19 20:23:02.542464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:28.946 [2024-11-19 20:23:02.542473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:28.946 [2024-11-19 20:23:02.542482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:28.946 [2024-11-19 20:23:02.542490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.946 [2024-11-19 20:23:02.542537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:28.946 [2024-11-19 20:23:02.542547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:28.946 [2024-11-19 20:23:02.542555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:28.946 [2024-11-19 20:23:02.542563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.946 [2024-11-19 20:23:02.542697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.262 ms, result 0
00:33:29.519
00:33:29.519
00:33:29.780 20:23:03 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:32.329 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:33:32.329 Process with pid 81615 is not found
00:33:32.329 Remove shared memory files
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81615
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81615 ']'
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81615
00:33:32.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81615) - No such process
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81615 is not found'
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_band_md /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_l2p_l1 /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_l2p_l2 /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_l2p_l2_ctx /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_nvc_md /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_p2l_pool /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_sb /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_sb_shm /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_trim_bitmap /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_trim_log /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_trim_md /dev/hugepages/ftl_6cd190ad-34d2-462b-b0b7-b02d87ff5233_vmap
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:32.329 20:23:05 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:33:32.329 ************************************
00:33:32.330 END TEST ftl_restore_fast
00:33:32.330 ************************************
00:33:32.330
00:33:32.330 real 4m28.560s
00:33:32.330 user 4m17.246s
00:33:32.330 sys 0m11.084s
00:33:32.330 20:23:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:32.330 20:23:05 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:33:32.330 Process with pid 72187 is not found
00:33:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@14 -- # killprocess 72187
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 72187 ']'
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@958 -- # kill -0 72187
00:33:32.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72187) - No such process
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72187 is not found'
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84356
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84356
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 84356 ']'
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:32.330 20:23:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:32.330 20:23:05 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:32.330 [2024-11-19 20:23:05.877051] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
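
The ftl_restore_fast teardown above asserts restore correctness with md5sum -c: a checksum of the test data is captured before the device is torn down and re-verified once the FTL device has been restored from shared memory. Stripped to its essentials, the pattern is the following (the write and restart steps are summarized here, not shown verbatim in this section):

  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # checksum the data once written
  # ... FTL fast shutdown, then startup with SHM-based recovery, as traced above ...
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # prints 'testfile: OK' and exits 0 only if the restored data matches
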
00:33:32.330 [2024-11-19 20:23:05.877205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84356 ]
00:33:32.330 [2024-11-19 20:23:06.040965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:32.591 [2024-11-19 20:23:06.157960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:33.190 20:23:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:33.190 20:23:06 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:33.190 20:23:06 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:33.460 nvme0n1
00:33:33.460 20:23:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:33.460 20:23:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:33.460 20:23:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:33.722 20:23:07 ftl -- ftl/common.sh@28 -- # stores=98159960-1a8a-4f71-a55c-a356310e369d
00:33:33.722 20:23:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:33.722 20:23:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 98159960-1a8a-4f71-a55c-a356310e369d
00:33:33.982 20:23:07 ftl -- ftl/ftl.sh@23 -- # killprocess 84356
00:33:33.982 20:23:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 84356 ']'
00:33:33.982 20:23:07 ftl -- common/autotest_common.sh@958 -- # kill -0 84356
00:33:33.982 20:23:07 ftl -- common/autotest_common.sh@959 -- # uname
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84356
00:33:33.983 killing process with pid 84356
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84356'
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@973 -- # kill 84356
00:33:33.983 20:23:07 ftl -- common/autotest_common.sh@978 -- # wait 84356
00:33:35.369 20:23:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:35.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:35.631 Waiting for block devices as requested
00:33:35.631 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:35.631 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:35.891 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:35.891 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:41.190 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:41.190 20:23:14 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:41.190 Remove shared memory files
00:33:41.190 20:23:14 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:41.190 20:23:14 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:41.191 20:23:14 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:41.191 20:23:14 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:41.191 20:23:14 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:41.191 20:23:14 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:41.191
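
The clear_lvols helper traced above enumerates lvolstores over JSON-RPC and deletes each one before the target is killed, so a stale store never leaks into the next test. Reconstructed from the xtrace output (the actual helper in ftl/common.sh may differ in detail), it is roughly:

  clear_lvols() {
      # list every lvolstore UUID, then drop them one by one
      stores=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
      for lvs in $stores; do
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
      done
  }
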
************************************ 00:33:41.191 END TEST ftl 00:33:41.191 ************************************ 00:33:41.191 00:33:41.191 real 18m22.493s 00:33:41.191 user 20m19.557s 00:33:41.191 sys 1m31.936s 00:33:41.191 20:23:14 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.191 20:23:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:41.191 20:23:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:41.191 20:23:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:41.191 20:23:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:41.191 20:23:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:41.191 20:23:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:41.191 20:23:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:41.191 20:23:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:41.191 20:23:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:41.191 20:23:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:41.191 20:23:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:41.191 20:23:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:41.191 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:33:41.191 20:23:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:41.191 20:23:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:41.191 20:23:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:41.191 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:33:42.578 INFO: APP EXITING 00:33:42.578 INFO: killing all VMs 00:33:42.578 INFO: killing vhost app 00:33:42.578 INFO: EXIT DONE 00:33:42.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:43.101 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:43.101 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:43.101 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:43.101 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:43.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:43.935 Cleaning 00:33:43.935 Removing: /var/run/dpdk/spdk0/config 00:33:43.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:43.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:43.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:43.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:43.935 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:43.935 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:43.935 Removing: /var/run/dpdk/spdk0 00:33:43.935 Removing: /var/run/dpdk/spdk_pid56894 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57091 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57303 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57396 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57430 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57553 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57570 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57759 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57852 00:33:43.935 Removing: /var/run/dpdk/spdk_pid57942 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58048 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58134 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58179 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58210 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58280 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58364 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58796 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58849 
00:33:43.935 Removing: /var/run/dpdk/spdk_pid58901 00:33:43.935 Removing: /var/run/dpdk/spdk_pid58917 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59008 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59024 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59126 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59137 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59195 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59208 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59261 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59273 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59433 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59464 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59553 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59725 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59809 00:33:43.935 Removing: /var/run/dpdk/spdk_pid59840 00:33:43.935 Removing: /var/run/dpdk/spdk_pid60289 00:33:44.226 Removing: /var/run/dpdk/spdk_pid60387 00:33:44.226 Removing: /var/run/dpdk/spdk_pid60496 00:33:44.226 Removing: /var/run/dpdk/spdk_pid60549 00:33:44.226 Removing: /var/run/dpdk/spdk_pid60575 00:33:44.226 Removing: /var/run/dpdk/spdk_pid60653 00:33:44.226 Removing: /var/run/dpdk/spdk_pid61278 00:33:44.226 Removing: /var/run/dpdk/spdk_pid61314 00:33:44.226 Removing: /var/run/dpdk/spdk_pid61793 00:33:44.226 Removing: /var/run/dpdk/spdk_pid61891 00:33:44.226 Removing: /var/run/dpdk/spdk_pid62006 00:33:44.226 Removing: /var/run/dpdk/spdk_pid62059 00:33:44.226 Removing: /var/run/dpdk/spdk_pid62090 00:33:44.226 Removing: /var/run/dpdk/spdk_pid62110 00:33:44.226 Removing: /var/run/dpdk/spdk_pid63981 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64107 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64111 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64134 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64174 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64178 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64190 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64235 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64239 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64251 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64296 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64300 00:33:44.226 Removing: /var/run/dpdk/spdk_pid64312 00:33:44.226 Removing: /var/run/dpdk/spdk_pid65678 00:33:44.226 Removing: /var/run/dpdk/spdk_pid65770 00:33:44.226 Removing: /var/run/dpdk/spdk_pid67169 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68550 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68633 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68720 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68796 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68896 00:33:44.226 Removing: /var/run/dpdk/spdk_pid68967 00:33:44.226 Removing: /var/run/dpdk/spdk_pid69109 00:33:44.226 Removing: /var/run/dpdk/spdk_pid69468 00:33:44.226 Removing: /var/run/dpdk/spdk_pid69499 00:33:44.226 Removing: /var/run/dpdk/spdk_pid69947 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70127 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70228 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70344 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70387 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70418 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70715 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70771 00:33:44.226 Removing: /var/run/dpdk/spdk_pid70843 00:33:44.226 Removing: /var/run/dpdk/spdk_pid71234 00:33:44.226 Removing: /var/run/dpdk/spdk_pid71384 00:33:44.226 Removing: /var/run/dpdk/spdk_pid72187 00:33:44.226 Removing: /var/run/dpdk/spdk_pid72319 00:33:44.226 Removing: /var/run/dpdk/spdk_pid72490 00:33:44.226 Removing: 
/var/run/dpdk/spdk_pid72576 00:33:44.226 Removing: /var/run/dpdk/spdk_pid72884 00:33:44.226 Removing: /var/run/dpdk/spdk_pid73170 00:33:44.226 Removing: /var/run/dpdk/spdk_pid73529 00:33:44.226 Removing: /var/run/dpdk/spdk_pid73714 00:33:44.226 Removing: /var/run/dpdk/spdk_pid73916 00:33:44.226 Removing: /var/run/dpdk/spdk_pid73963 00:33:44.226 Removing: /var/run/dpdk/spdk_pid74150 00:33:44.226 Removing: /var/run/dpdk/spdk_pid74185 00:33:44.226 Removing: /var/run/dpdk/spdk_pid74232 00:33:44.226 Removing: /var/run/dpdk/spdk_pid74524 00:33:44.226 Removing: /var/run/dpdk/spdk_pid74768 00:33:44.226 Removing: /var/run/dpdk/spdk_pid75364 00:33:44.226 Removing: /var/run/dpdk/spdk_pid76076 00:33:44.226 Removing: /var/run/dpdk/spdk_pid76716 00:33:44.226 Removing: /var/run/dpdk/spdk_pid77517 00:33:44.226 Removing: /var/run/dpdk/spdk_pid77661 00:33:44.226 Removing: /var/run/dpdk/spdk_pid77741 00:33:44.226 Removing: /var/run/dpdk/spdk_pid78294 00:33:44.226 Removing: /var/run/dpdk/spdk_pid78350 00:33:44.226 Removing: /var/run/dpdk/spdk_pid79333 00:33:44.226 Removing: /var/run/dpdk/spdk_pid79795 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80559 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80681 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80730 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80787 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80843 00:33:44.226 Removing: /var/run/dpdk/spdk_pid80908 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81127 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81212 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81279 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81335 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81364 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81431 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81615 00:33:44.226 Removing: /var/run/dpdk/spdk_pid81835 00:33:44.226 Removing: /var/run/dpdk/spdk_pid82338 00:33:44.226 Removing: /var/run/dpdk/spdk_pid82986 00:33:44.226 Removing: /var/run/dpdk/spdk_pid83601 00:33:44.226 Removing: /var/run/dpdk/spdk_pid84356 00:33:44.226 Clean 00:33:44.488 20:23:18 -- common/autotest_common.sh@1453 -- # return 0 00:33:44.488 20:23:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:44.488 20:23:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:44.488 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:33:44.488 20:23:18 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:44.488 20:23:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:44.488 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:33:44.488 20:23:18 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:44.488 20:23:18 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:44.488 20:23:18 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:44.488 20:23:18 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:44.489 20:23:18 -- spdk/autotest.sh@398 -- # hostname 00:33:44.489 20:23:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:44.489 geninfo: WARNING: invalid characters removed from testname! 
00:34:11.166 20:23:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:13.080 20:23:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:14.985 20:23:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:17.527 20:23:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:20.834 20:23:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:23.386 20:23:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:25.294 20:23:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:25.294 20:23:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:25.294 20:23:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:25.294 20:23:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:25.294 20:23:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:25.294 20:23:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:25.294 + [[ -n 5037 ]] 00:34:25.294 + sudo kill 5037 00:34:25.305 [Pipeline] } 00:34:25.321 [Pipeline] // timeout 00:34:25.326 [Pipeline] } 00:34:25.341 [Pipeline] // stage 00:34:25.346 [Pipeline] } 00:34:25.361 [Pipeline] // catchError 00:34:25.370 [Pipeline] stage 00:34:25.372 [Pipeline] { (Stop VM) 00:34:25.385 [Pipeline] sh 00:34:25.669 + vagrant halt 00:34:28.268 ==> default: Halting domain... 
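
The coverage post-processing traced above merges the baseline and test lcov captures, then filters out code that should not count against SPDK coverage. Distilled to the core commands (the --rc lcov_*/genhtml_* flags from the log are omitted here for brevity):

  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                          # merge baseline and test captures
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                               # drop vendored DPDK sources
  lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info   # drop system headers
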
00:34:34.872 [Pipeline] sh 00:34:35.156 + vagrant destroy -f 00:34:37.702 ==> default: Removing domain... 00:34:38.656 [Pipeline] sh 00:34:38.939 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:38.950 [Pipeline] } 00:34:38.969 [Pipeline] // stage 00:34:38.975 [Pipeline] } 00:34:38.992 [Pipeline] // dir 00:34:38.999 [Pipeline] } 00:34:39.014 [Pipeline] // wrap 00:34:39.020 [Pipeline] } 00:34:39.034 [Pipeline] // catchError 00:34:39.045 [Pipeline] stage 00:34:39.048 [Pipeline] { (Epilogue) 00:34:39.063 [Pipeline] sh 00:34:39.349 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:44.643 [Pipeline] catchError 00:34:44.645 [Pipeline] { 00:34:44.658 [Pipeline] sh 00:34:44.945 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:44.945 Artifacts sizes are good 00:34:44.956 [Pipeline] } 00:34:44.970 [Pipeline] // catchError 00:34:44.982 [Pipeline] archiveArtifacts 00:34:44.989 Archiving artifacts 00:34:45.097 [Pipeline] cleanWs 00:34:45.110 [WS-CLEANUP] Deleting project workspace... 00:34:45.110 [WS-CLEANUP] Deferred wipeout is used... 00:34:45.117 [WS-CLEANUP] done 00:34:45.119 [Pipeline] } 00:34:45.135 [Pipeline] // stage 00:34:45.140 [Pipeline] } 00:34:45.155 [Pipeline] // node 00:34:45.160 [Pipeline] End of Pipeline 00:34:45.200 Finished: SUCCESS