00:00:00.001 Started by upstream project "autotest-per-patch" build number 132361 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.159 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.159 The recommended git tool is: git 00:00:00.160 using credential 00000000-0000-0000-0000-000000000002 00:00:00.161 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.205 Fetching changes from the remote Git repository 00:00:00.207 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.238 Using shallow fetch with depth 1 00:00:00.238 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.238 > git --version # timeout=10 00:00:00.261 > git --version # 'git version 2.39.2' 00:00:00.261 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.936 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.948 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.960 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.960 > git config core.sparsecheckout # timeout=10 00:00:06.970 > git read-tree -mu HEAD # timeout=10 00:00:06.985 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.004 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.004 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.090 [Pipeline] Start of Pipeline 00:00:07.101 [Pipeline] library 00:00:07.102 Loading library shm_lib@master 00:00:07.102 Library shm_lib@master is cached. Copying from home. 00:00:07.115 [Pipeline] node 00:00:07.124 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:07.126 [Pipeline] { 00:00:07.134 [Pipeline] catchError 00:00:07.135 [Pipeline] { 00:00:07.146 [Pipeline] wrap 00:00:07.154 [Pipeline] { 00:00:07.161 [Pipeline] stage 00:00:07.162 [Pipeline] { (Prologue) 00:00:07.181 [Pipeline] echo 00:00:07.182 Node: VM-host-WFP1 00:00:07.188 [Pipeline] cleanWs 00:00:07.198 [WS-CLEANUP] Deleting project workspace... 00:00:07.198 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.204 [WS-CLEANUP] done 00:00:07.425 [Pipeline] setCustomBuildProperty 00:00:07.523 [Pipeline] httpRequest 00:00:08.219 [Pipeline] echo 00:00:08.221 Sorcerer 10.211.164.20 is alive 00:00:08.229 [Pipeline] retry 00:00:08.231 [Pipeline] { 00:00:08.242 [Pipeline] httpRequest 00:00:08.246 HttpMethod: GET 00:00:08.247 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.247 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.266 Response Code: HTTP/1.1 200 OK 00:00:08.266 Success: Status code 200 is in the accepted range: 200,404 00:00:08.267 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.024 [Pipeline] } 00:00:14.041 [Pipeline] // retry 00:00:14.050 [Pipeline] sh 00:00:14.342 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.358 [Pipeline] httpRequest 00:00:14.682 [Pipeline] echo 00:00:14.683 Sorcerer 10.211.164.20 is alive 00:00:14.703 [Pipeline] retry 00:00:14.706 [Pipeline] { 00:00:14.728 [Pipeline] httpRequest 00:00:14.737 HttpMethod: GET 00:00:14.738 URL: http://10.211.164.20/packages/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:14.739 Sending request to url: http://10.211.164.20/packages/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:14.744 Response Code: HTTP/1.1 200 OK 00:00:14.745 Success: Status code 200 is in the accepted range: 200,404 00:00:14.745 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:02:29.394 [Pipeline] } 00:02:29.414 [Pipeline] // retry 00:02:29.421 [Pipeline] sh 00:02:29.705 + tar --no-same-owner -xf spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:02:32.253 [Pipeline] sh 00:02:32.538 + git -C spdk log --oneline -n5 00:02:32.538 717acfa62 test/common: Move nvme_namespace_revert() to nvme/functions.sh 00:02:32.538 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:32.538 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:02:32.538 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:32.538 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
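Note: the jbp and spdk sources above are not cloned from Gerrit on the test node; they are fetched as pre-packaged tarballs, keyed by commit SHA, from the package cache at 10.211.164.20 ("Sorcerer") and unpacked into the workspace. A minimal shell sketch of that fetch-and-unpack step, using the cache URL, commit SHA, and tar flags shown in this run; curl is an assumption here, since the pipeline itself uses the Jenkins httpRequest step:

# Sketch only: fetch a snapshot of the repo by commit SHA from the package
# cache, then unpack it without preserving the archive's file ownership.
SORCERER=http://10.211.164.20/packages
SPDK_SHA=717acfa62eb2b6321bcc0b4d71e0512da02d7ee6   # commit checked out in this run
curl -fO "${SORCERER}/spdk_${SPDK_SHA}.tar.gz"
tar --no-same-owner -xf "spdk_${SPDK_SHA}.tar.gz"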
00:02:32.558 [Pipeline] writeFile 00:02:32.576 [Pipeline] sh 00:02:32.862 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:32.873 [Pipeline] sh 00:02:33.152 + cat autorun-spdk.conf 00:02:33.152 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.152 SPDK_TEST_NVME=1 00:02:33.152 SPDK_TEST_FTL=1 00:02:33.152 SPDK_TEST_ISAL=1 00:02:33.152 SPDK_RUN_ASAN=1 00:02:33.152 SPDK_RUN_UBSAN=1 00:02:33.152 SPDK_TEST_XNVME=1 00:02:33.152 SPDK_TEST_NVME_FDP=1 00:02:33.152 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.160 RUN_NIGHTLY=0 00:02:33.162 [Pipeline] } 00:02:33.175 [Pipeline] // stage 00:02:33.189 [Pipeline] stage 00:02:33.191 [Pipeline] { (Run VM) 00:02:33.204 [Pipeline] sh 00:02:33.486 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:33.486 + echo 'Start stage prepare_nvme.sh' 00:02:33.486 Start stage prepare_nvme.sh 00:02:33.486 + [[ -n 1 ]] 00:02:33.486 + disk_prefix=ex1 00:02:33.486 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:02:33.487 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:02:33.487 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:02:33.487 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.487 ++ SPDK_TEST_NVME=1 00:02:33.487 ++ SPDK_TEST_FTL=1 00:02:33.487 ++ SPDK_TEST_ISAL=1 00:02:33.487 ++ SPDK_RUN_ASAN=1 00:02:33.487 ++ SPDK_RUN_UBSAN=1 00:02:33.487 ++ SPDK_TEST_XNVME=1 00:02:33.487 ++ SPDK_TEST_NVME_FDP=1 00:02:33.487 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.487 ++ RUN_NIGHTLY=0 00:02:33.487 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:02:33.487 + nvme_files=() 00:02:33.487 + declare -A nvme_files 00:02:33.487 + backend_dir=/var/lib/libvirt/images/backends 00:02:33.487 + nvme_files['nvme.img']=5G 00:02:33.487 + nvme_files['nvme-cmb.img']=5G 00:02:33.487 + nvme_files['nvme-multi0.img']=4G 00:02:33.487 + nvme_files['nvme-multi1.img']=4G 00:02:33.487 + nvme_files['nvme-multi2.img']=4G 00:02:33.487 + nvme_files['nvme-openstack.img']=8G 00:02:33.487 + nvme_files['nvme-zns.img']=5G 00:02:33.487 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:33.487 + (( SPDK_TEST_FTL == 1 )) 00:02:33.487 + nvme_files["nvme-ftl.img"]=6G 00:02:33.487 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:33.487 + nvme_files["nvme-fdp.img"]=1G 00:02:33.487 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:33.487 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:33.487 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:33.487 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:33.487 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:33.487 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.487 + for nvme in "${!nvme_files[@]}" 00:02:33.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:33.745 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.745 + for nvme in "${!nvme_files[@]}" 00:02:33.745 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:33.745 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.745 + for nvme in "${!nvme_files[@]}" 00:02:33.745 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:33.745 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:33.745 + for nvme in "${!nvme_files[@]}" 00:02:33.745 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:34.004 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:34.004 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:34.004 + echo 'End stage prepare_nvme.sh' 00:02:34.004 End stage prepare_nvme.sh 00:02:34.014 [Pipeline] sh 00:02:34.292 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:34.292 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:34.292 00:02:34.292 
DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:02:34.292 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:02:34.292 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:02:34.292 HELP=0 00:02:34.292 DRY_RUN=0 00:02:34.292 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:34.292 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:34.292 NVME_AUTO_CREATE=0 00:02:34.292 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:34.292 NVME_CMB=,,,, 00:02:34.292 NVME_PMR=,,,, 00:02:34.292 NVME_ZNS=,,,, 00:02:34.292 NVME_MS=true,,,, 00:02:34.292 NVME_FDP=,,,on, 00:02:34.292 SPDK_VAGRANT_DISTRO=fedora39 00:02:34.292 SPDK_VAGRANT_VMCPU=10 00:02:34.292 SPDK_VAGRANT_VMRAM=12288 00:02:34.292 SPDK_VAGRANT_PROVIDER=libvirt 00:02:34.292 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:34.292 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:34.292 SPDK_OPENSTACK_NETWORK=0 00:02:34.292 VAGRANT_PACKAGE_BOX=0 00:02:34.292 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:02:34.292 FORCE_DISTRO=true 00:02:34.292 VAGRANT_BOX_VERSION= 00:02:34.292 EXTRA_VAGRANTFILES= 00:02:34.292 NIC_MODEL=e1000 00:02:34.292 00:02:34.292 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:02:34.292 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:02:37.610 Bringing machine 'default' up with 'libvirt' provider... 00:02:38.545 ==> default: Creating image (snapshot of base box volume). 00:02:38.545 ==> default: Creating domain with the following settings... 
00:02:38.545 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732090525_d2d813602725e2ec8911 00:02:38.545 ==> default: -- Domain type: kvm 00:02:38.545 ==> default: -- Cpus: 10 00:02:38.545 ==> default: -- Feature: acpi 00:02:38.545 ==> default: -- Feature: apic 00:02:38.545 ==> default: -- Feature: pae 00:02:38.545 ==> default: -- Memory: 12288M 00:02:38.545 ==> default: -- Memory Backing: hugepages: 00:02:38.545 ==> default: -- Management MAC: 00:02:38.545 ==> default: -- Loader: 00:02:38.545 ==> default: -- Nvram: 00:02:38.545 ==> default: -- Base box: spdk/fedora39 00:02:38.545 ==> default: -- Storage pool: default 00:02:38.545 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732090525_d2d813602725e2ec8911.img (20G) 00:02:38.545 ==> default: -- Volume Cache: default 00:02:38.545 ==> default: -- Kernel: 00:02:38.545 ==> default: -- Initrd: 00:02:38.545 ==> default: -- Graphics Type: vnc 00:02:38.545 ==> default: -- Graphics Port: -1 00:02:38.545 ==> default: -- Graphics IP: 127.0.0.1 00:02:38.545 ==> default: -- Graphics Password: Not defined 00:02:38.545 ==> default: -- Video Type: cirrus 00:02:38.545 ==> default: -- Video VRAM: 9216 00:02:38.545 ==> default: -- Sound Type: 00:02:38.545 ==> default: -- Keymap: en-us 00:02:38.545 ==> default: -- TPM Path: 00:02:38.545 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:38.545 ==> default: -- Command line args: 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:38.545 ==> default: -> value=-drive, 00:02:38.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:38.545 ==> default: -> value=-device, 00:02:38.545 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.804 ==> default: Creating shared folders metadata... 00:02:38.804 ==> default: Starting domain. 00:02:40.708 ==> default: Waiting for domain to get an IP address... 00:03:02.653 ==> default: Waiting for SSH to become available... 00:03:02.653 ==> default: Configuring and enabling network interfaces... 00:03:06.843 default: SSH address: 192.168.121.20:22 00:03:06.843 default: SSH username: vagrant 00:03:06.843 default: SSH auth method: private key 00:03:10.129 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:20.102 ==> default: Mounting SSHFS shared folder... 00:03:21.476 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:21.476 ==> default: Checking Mount.. 00:03:22.904 ==> default: Folder Successfully Mounted! 00:03:22.904 ==> default: Running provisioner: file... 00:03:24.280 default: ~/.gitconfig => .gitconfig 00:03:24.846 00:03:24.846 SUCCESS! 00:03:24.846 00:03:24.846 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:03:24.846 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:24.846 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:03:24.846 00:03:24.854 [Pipeline] } 00:03:24.868 [Pipeline] // stage 00:03:24.874 [Pipeline] dir 00:03:24.875 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt 00:03:24.876 [Pipeline] { 00:03:24.889 [Pipeline] catchError 00:03:24.891 [Pipeline] { 00:03:24.904 [Pipeline] sh 00:03:25.183 + vagrant ssh-config --host vagrant 00:03:25.183 + sed -ne /^Host/,$p 00:03:25.183 + tee ssh_conf 00:03:28.470 Host vagrant 00:03:28.470 HostName 192.168.121.20 00:03:28.470 User vagrant 00:03:28.470 Port 22 00:03:28.470 UserKnownHostsFile /dev/null 00:03:28.470 StrictHostKeyChecking no 00:03:28.470 PasswordAuthentication no 00:03:28.470 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:28.470 IdentitiesOnly yes 00:03:28.470 LogLevel FATAL 00:03:28.470 ForwardAgent yes 00:03:28.470 ForwardX11 yes 00:03:28.470 00:03:28.484 [Pipeline] withEnv 00:03:28.486 [Pipeline] { 00:03:28.500 [Pipeline] sh 00:03:28.781 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:28.781 source /etc/os-release 00:03:28.781 [[ -e /image.version ]] && img=$(< /image.version) 00:03:28.781 # Minimal, systemd-like check. 
00:03:28.781 if [[ -e /.dockerenv ]]; then 00:03:28.781 # Clear garbage from the node's name: 00:03:28.781 # agt-er_autotest_547-896 -> autotest_547-896 00:03:28.781 # $HOSTNAME is the actual container id 00:03:28.781 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:28.781 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:28.781 # We can assume this is a mount from a host where container is running, 00:03:28.781 # so fetch its hostname to easily identify the target swarm worker. 00:03:28.781 container="$(< /etc/hostname) ($agent)" 00:03:28.781 else 00:03:28.781 # Fallback 00:03:28.781 container=$agent 00:03:28.781 fi 00:03:28.781 fi 00:03:28.781 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:28.781 00:03:29.103 [Pipeline] } 00:03:29.119 [Pipeline] // withEnv 00:03:29.128 [Pipeline] setCustomBuildProperty 00:03:29.144 [Pipeline] stage 00:03:29.146 [Pipeline] { (Tests) 00:03:29.163 [Pipeline] sh 00:03:29.443 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:29.718 [Pipeline] sh 00:03:30.000 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:30.273 [Pipeline] timeout 00:03:30.274 Timeout set to expire in 50 min 00:03:30.277 [Pipeline] { 00:03:30.290 [Pipeline] sh 00:03:30.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:31.138 HEAD is now at 717acfa62 test/common: Move nvme_namespace_revert() to nvme/functions.sh 00:03:31.150 [Pipeline] sh 00:03:31.436 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:31.711 [Pipeline] sh 00:03:31.990 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:32.265 [Pipeline] sh 00:03:32.546 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:03:32.806 ++ readlink -f spdk_repo 00:03:32.806 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:32.806 + [[ -n /home/vagrant/spdk_repo ]] 00:03:32.806 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:32.806 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:32.806 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:32.806 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:32.806 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:32.806 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:32.806 + cd /home/vagrant/spdk_repo 00:03:32.806 + source /etc/os-release 00:03:32.806 ++ NAME='Fedora Linux' 00:03:32.806 ++ VERSION='39 (Cloud Edition)' 00:03:32.806 ++ ID=fedora 00:03:32.806 ++ VERSION_ID=39 00:03:32.806 ++ VERSION_CODENAME= 00:03:32.806 ++ PLATFORM_ID=platform:f39 00:03:32.806 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:32.806 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:32.806 ++ LOGO=fedora-logo-icon 00:03:32.806 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:32.806 ++ HOME_URL=https://fedoraproject.org/ 00:03:32.806 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:32.806 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:32.806 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:32.806 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:32.806 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:32.806 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:32.806 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:32.806 ++ SUPPORT_END=2024-11-12 00:03:32.806 ++ VARIANT='Cloud Edition' 00:03:32.806 ++ VARIANT_ID=cloud 00:03:32.806 + uname -a 00:03:32.806 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:32.806 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:33.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.658 Hugepages 00:03:33.658 node hugesize free / total 00:03:33.658 node0 1048576kB 0 / 0 00:03:33.658 node0 2048kB 0 / 0 00:03:33.658 00:03:33.658 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.658 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:33.658 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:33.658 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:33.658 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:33.658 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:33.658 + rm -f /tmp/spdk-ld-path 00:03:33.658 + source autorun-spdk.conf 00:03:33.658 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.658 ++ SPDK_TEST_NVME=1 00:03:33.658 ++ SPDK_TEST_FTL=1 00:03:33.658 ++ SPDK_TEST_ISAL=1 00:03:33.658 ++ SPDK_RUN_ASAN=1 00:03:33.658 ++ SPDK_RUN_UBSAN=1 00:03:33.658 ++ SPDK_TEST_XNVME=1 00:03:33.658 ++ SPDK_TEST_NVME_FDP=1 00:03:33.658 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:33.658 ++ RUN_NIGHTLY=0 00:03:33.658 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:33.658 + [[ -n '' ]] 00:03:33.658 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:33.658 + for M in /var/spdk/build-*-manifest.txt 00:03:33.658 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:33.658 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.658 + for M in /var/spdk/build-*-manifest.txt 00:03:33.658 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:33.658 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.934 + for M in /var/spdk/build-*-manifest.txt 00:03:33.934 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:33.934 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.934 ++ uname 00:03:33.934 + [[ Linux == \L\i\n\u\x ]] 00:03:33.934 + sudo dmesg -T 00:03:33.934 + sudo dmesg --clear 00:03:33.934 + dmesg_pid=5260 00:03:33.934 
+ sudo dmesg -Tw 00:03:33.934 + [[ Fedora Linux == FreeBSD ]] 00:03:33.934 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:33.934 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:33.934 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:33.934 + [[ -x /usr/src/fio-static/fio ]] 00:03:33.934 + export FIO_BIN=/usr/src/fio-static/fio 00:03:33.934 + FIO_BIN=/usr/src/fio-static/fio 00:03:33.934 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:33.934 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:33.934 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:33.934 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:33.934 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:33.934 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:33.934 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:33.934 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:33.934 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.935 08:16:21 -- common/autotest_common.sh@1637 -- $ [[ n == y ]] 00:03:33.935 08:16:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:33.935 08:16:21 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:33.935 08:16:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:33.935 08:16:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:34.193 08:16:21 -- common/autotest_common.sh@1637 -- $ [[ n == y ]] 00:03:34.193 08:16:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.193 08:16:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:34.193 08:16:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:34.193 08:16:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.193 08:16:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.193 08:16:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.193 08:16:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.193 08:16:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.193 08:16:21 -- paths/export.sh@5 -- $ export PATH 00:03:34.193 08:16:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.193 08:16:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.193 08:16:21 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:34.193 08:16:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732090581.XXXXXX 00:03:34.193 08:16:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732090581.fYiQ4v 00:03:34.193 08:16:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:34.193 08:16:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:34.193 08:16:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:34.193 08:16:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:34.193 08:16:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:34.193 08:16:21 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:34.193 08:16:21 -- common/autotest_common.sh@412 -- $ xtrace_disable 00:03:34.193 08:16:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:34.193 08:16:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:34.193 08:16:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:34.193 08:16:21 -- pm/common@17 -- $ 
local monitor 00:03:34.193 08:16:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.193 08:16:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.193 08:16:21 -- pm/common@25 -- $ sleep 1 00:03:34.193 08:16:21 -- pm/common@21 -- $ date +%s 00:03:34.193 08:16:21 -- pm/common@21 -- $ date +%s 00:03:34.193 08:16:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732090581 00:03:34.193 08:16:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732090581 00:03:34.193 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732090581_collect-cpu-load.pm.log 00:03:34.194 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732090581_collect-vmstat.pm.log 00:03:35.127 08:16:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:35.127 08:16:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:35.127 08:16:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:35.127 08:16:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:35.127 08:16:22 -- spdk/autobuild.sh@16 -- $ date -u 00:03:35.127 Wed Nov 20 08:16:22 AM UTC 2024 00:03:35.127 08:16:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:35.127 v25.01-pre-200-g717acfa62 00:03:35.127 08:16:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:35.127 08:16:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:35.127 08:16:22 -- common/autotest_common.sh@1108 -- $ '[' 3 -le 1 ']' 00:03:35.127 08:16:22 -- common/autotest_common.sh@1114 -- $ xtrace_disable 00:03:35.127 08:16:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:35.127 ************************************ 00:03:35.127 START TEST asan 00:03:35.127 ************************************ 00:03:35.127 using asan 00:03:35.127 08:16:22 asan -- common/autotest_common.sh@1132 -- $ echo 'using asan' 00:03:35.127 00:03:35.127 real 0m0.001s 00:03:35.127 user 0m0.000s 00:03:35.127 sys 0m0.000s 00:03:35.127 08:16:22 asan -- common/autotest_common.sh@1133 -- $ xtrace_disable 00:03:35.127 ************************************ 00:03:35.127 END TEST asan 00:03:35.127 ************************************ 00:03:35.127 08:16:22 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:35.450 08:16:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:35.450 08:16:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:35.450 08:16:22 -- common/autotest_common.sh@1108 -- $ '[' 3 -le 1 ']' 00:03:35.450 08:16:22 -- common/autotest_common.sh@1114 -- $ xtrace_disable 00:03:35.450 08:16:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:35.450 ************************************ 00:03:35.450 START TEST ubsan 00:03:35.450 ************************************ 00:03:35.450 using ubsan 00:03:35.450 08:16:22 ubsan -- common/autotest_common.sh@1132 -- $ echo 'using ubsan' 00:03:35.450 00:03:35.450 real 0m0.000s 00:03:35.450 user 0m0.000s 00:03:35.450 sys 0m0.000s 00:03:35.450 08:16:22 ubsan -- common/autotest_common.sh@1133 -- $ xtrace_disable 00:03:35.450 ************************************ 00:03:35.450 END TEST ubsan 00:03:35.450 ************************************ 00:03:35.450 08:16:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:35.450 08:16:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:35.450 08:16:22 -- 
spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:35.450 08:16:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:35.450 08:16:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:03:35.450 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:35.450 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:36.016 Using 'verbs' RDMA provider 00:03:52.278 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:10.365 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:10.365 Creating mk/config.mk...done. 00:04:10.365 Creating mk/cc.flags.mk...done. 00:04:10.365 Type 'make' to build. 00:04:10.365 08:16:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:10.365 08:16:56 -- common/autotest_common.sh@1108 -- $ '[' 3 -le 1 ']' 00:04:10.365 08:16:56 -- common/autotest_common.sh@1114 -- $ xtrace_disable 00:04:10.365 08:16:56 -- common/autotest_common.sh@10 -- $ set +x 00:04:10.365 ************************************ 00:04:10.365 START TEST make 00:04:10.365 ************************************ 00:04:10.365 08:16:56 make -- common/autotest_common.sh@1132 -- $ make -j10 00:04:10.365 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:10.365 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:10.365 meson setup builddir \ 00:04:10.365 -Dwith-libaio=enabled \ 00:04:10.365 -Dwith-liburing=enabled \ 00:04:10.365 -Dwith-libvfn=disabled \ 00:04:10.365 -Dwith-spdk=disabled \ 00:04:10.365 -Dexamples=false \ 00:04:10.365 -Dtests=false \ 00:04:10.365 -Dtools=false && \ 00:04:10.365 meson compile -C builddir && \ 00:04:10.365 cd -) 00:04:10.365 make[1]: Nothing to be done for 'all'. 
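Note: the parenthesized command echoed above is the entire xnvme sub-build, and the Meson output that follows comes from exactly that invocation. It can be reproduced by hand with the same flags; a minimal sketch, assuming meson and ninja are installed and the checkout lives at /home/vagrant/spdk_repo/spdk as in this run:

# Sketch: configure and build the bundled xnvme the way autobuild does,
# with the libaio and io_uring backends enabled and libvfn/spdk
# integration, examples, tests, and tools all disabled.
cd /home/vagrant/spdk_repo/spdk/xnvme
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir \
  -Dwith-libaio=enabled \
  -Dwith-liburing=enabled \
  -Dwith-libvfn=disabled \
  -Dwith-spdk=disabled \
  -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir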
00:04:11.297 The Meson build system 00:04:11.297 Version: 1.5.0 00:04:11.297 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:11.297 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:11.297 Build type: native build 00:04:11.297 Project name: xnvme 00:04:11.297 Project version: 0.7.5 00:04:11.297 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:11.297 C linker for the host machine: cc ld.bfd 2.40-14 00:04:11.297 Host machine cpu family: x86_64 00:04:11.297 Host machine cpu: x86_64 00:04:11.297 Message: host_machine.system: linux 00:04:11.297 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:11.297 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:11.297 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:11.297 Run-time dependency threads found: YES 00:04:11.297 Has header "setupapi.h" : NO 00:04:11.297 Has header "linux/blkzoned.h" : YES 00:04:11.297 Has header "linux/blkzoned.h" : YES (cached) 00:04:11.297 Has header "libaio.h" : YES 00:04:11.297 Library aio found: YES 00:04:11.297 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:11.297 Run-time dependency liburing found: YES 2.2 00:04:11.297 Dependency libvfn skipped: feature with-libvfn disabled 00:04:11.297 Found CMake: /usr/bin/cmake (3.27.7) 00:04:11.297 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:11.297 Subproject spdk : skipped: feature with-spdk disabled 00:04:11.297 Run-time dependency appleframeworks found: NO (tried framework) 00:04:11.297 Run-time dependency appleframeworks found: NO (tried framework) 00:04:11.297 Library rt found: YES 00:04:11.297 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:11.297 Configuring xnvme_config.h using configuration 00:04:11.297 Configuring xnvme.spec using configuration 00:04:11.297 Run-time dependency bash-completion found: YES 2.11 00:04:11.297 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:11.297 Program cp found: YES (/usr/bin/cp) 00:04:11.297 Build targets in project: 3 00:04:11.297 00:04:11.297 xnvme 0.7.5 00:04:11.297 00:04:11.297 Subprojects 00:04:11.297 spdk : NO Feature 'with-spdk' disabled 00:04:11.297 00:04:11.297 User defined options 00:04:11.297 examples : false 00:04:11.297 tests : false 00:04:11.297 tools : false 00:04:11.297 with-libaio : enabled 00:04:11.297 with-liburing: enabled 00:04:11.297 with-libvfn : disabled 00:04:11.297 with-spdk : disabled 00:04:11.297 00:04:11.297 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:11.555 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:11.555 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:11.816 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:11.816 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:11.816 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:11.816 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:11.816 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:11.816 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:11.816 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:11.816 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:11.816 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:11.816 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:11.816 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:11.816 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:11.816 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:11.816 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:11.816 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:11.816 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:11.816 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:12.073 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:12.073 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:12.073 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:12.073 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:12.073 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:12.073 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:12.073 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:12.073 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:12.073 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:12.073 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:12.073 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:12.073 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:12.073 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:12.073 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:12.073 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:12.073 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:12.073 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:12.073 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:12.073 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:12.073 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:12.073 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:12.073 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:12.073 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:12.073 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:12.073 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:12.073 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:12.073 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:12.073 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:12.073 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:12.073 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:12.073 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:12.073 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:12.073 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:12.073 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:12.073 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:12.330 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:12.330 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:12.330 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:12.330 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:12.330 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:12.330 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:12.330 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:12.330 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:12.330 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:12.330 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:12.330 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:12.330 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:12.330 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:12.330 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:12.330 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:12.330 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:12.330 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:12.330 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:12.588 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:12.588 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:12.845 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:12.845 [75/76] Linking static target lib/libxnvme.a 00:04:12.845 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:12.845 INFO: autodetecting backend as ninja 00:04:12.845 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:12.845 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:20.956 The Meson build system 00:04:20.956 Version: 1.5.0 00:04:20.956 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:20.956 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:20.956 Build type: native build 00:04:20.956 Program cat found: YES (/usr/bin/cat) 00:04:20.956 Project name: DPDK 00:04:20.956 Project version: 24.03.0 00:04:20.956 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:20.956 C linker for the host machine: cc ld.bfd 2.40-14 00:04:20.956 Host machine cpu family: x86_64 00:04:20.956 Host machine cpu: x86_64 00:04:20.956 Message: ## Building in Developer Mode ## 00:04:20.956 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:20.956 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:20.956 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:20.956 Program python3 found: YES (/usr/bin/python3) 00:04:20.956 Program cat found: YES (/usr/bin/cat) 00:04:20.956 Compiler for C supports arguments -march=native: YES 00:04:20.956 Checking for size of "void *" : 8 00:04:20.956 Checking for size of "void *" : 8 (cached) 00:04:20.956 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:04:20.956 Library m found: YES 00:04:20.956 Library numa found: YES 00:04:20.956 Has header "numaif.h" : YES 00:04:20.956 Library fdt found: NO 00:04:20.956 Library execinfo found: NO 00:04:20.956 Has header "execinfo.h" : YES 00:04:20.956 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:20.956 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:20.956 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:20.956 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:20.956 Run-time dependency openssl found: YES 3.1.1 00:04:20.956 Run-time dependency libpcap found: YES 1.10.4 00:04:20.956 Has header "pcap.h" with dependency libpcap: YES 00:04:20.956 Compiler for C supports arguments -Wcast-qual: YES 00:04:20.956 Compiler for C supports arguments -Wdeprecated: YES 00:04:20.956 Compiler for C supports arguments -Wformat: YES 00:04:20.956 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:20.956 Compiler for C supports arguments -Wformat-security: NO 00:04:20.956 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:20.956 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:20.956 Compiler for C supports arguments -Wnested-externs: YES 00:04:20.956 Compiler for C supports arguments -Wold-style-definition: YES 00:04:20.956 Compiler for C supports arguments -Wpointer-arith: YES 00:04:20.956 Compiler for C supports arguments -Wsign-compare: YES 00:04:20.956 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:20.956 Compiler for C supports arguments -Wundef: YES 00:04:20.956 Compiler for C supports arguments -Wwrite-strings: YES 00:04:20.956 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:20.956 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:20.956 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:20.956 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:20.956 Program objdump found: YES (/usr/bin/objdump) 00:04:20.956 Compiler for C supports arguments -mavx512f: YES 00:04:20.956 Checking if "AVX512 checking" compiles: YES 00:04:20.956 Fetching value of define "__SSE4_2__" : 1 00:04:20.956 Fetching value of define "__AES__" : 1 00:04:20.956 Fetching value of define "__AVX__" : 1 00:04:20.956 Fetching value of define "__AVX2__" : 1 00:04:20.956 Fetching value of define "__AVX512BW__" : 1 00:04:20.956 Fetching value of define "__AVX512CD__" : 1 00:04:20.956 Fetching value of define "__AVX512DQ__" : 1 00:04:20.956 Fetching value of define "__AVX512F__" : 1 00:04:20.956 Fetching value of define "__AVX512VL__" : 1 00:04:20.956 Fetching value of define "__PCLMUL__" : 1 00:04:20.956 Fetching value of define "__RDRND__" : 1 00:04:20.956 Fetching value of define "__RDSEED__" : 1 00:04:20.956 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:20.956 Fetching value of define "__znver1__" : (undefined) 00:04:20.956 Fetching value of define "__znver2__" : (undefined) 00:04:20.956 Fetching value of define "__znver3__" : (undefined) 00:04:20.956 Fetching value of define "__znver4__" : (undefined) 00:04:20.956 Library asan found: YES 00:04:20.956 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:20.956 Message: lib/log: Defining dependency "log" 00:04:20.956 Message: lib/kvargs: Defining dependency "kvargs" 00:04:20.956 Message: lib/telemetry: Defining dependency "telemetry" 00:04:20.956 Library rt found: YES 00:04:20.956 Checking for function "getentropy" : NO 00:04:20.956 
Message: lib/eal: Defining dependency "eal" 00:04:20.956 Message: lib/ring: Defining dependency "ring" 00:04:20.956 Message: lib/rcu: Defining dependency "rcu" 00:04:20.956 Message: lib/mempool: Defining dependency "mempool" 00:04:20.956 Message: lib/mbuf: Defining dependency "mbuf" 00:04:20.956 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:20.956 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:20.956 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:20.956 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:20.956 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:20.956 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:20.956 Compiler for C supports arguments -mpclmul: YES 00:04:20.956 Compiler for C supports arguments -maes: YES 00:04:20.956 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:20.956 Compiler for C supports arguments -mavx512bw: YES 00:04:20.956 Compiler for C supports arguments -mavx512dq: YES 00:04:20.956 Compiler for C supports arguments -mavx512vl: YES 00:04:20.956 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:20.956 Compiler for C supports arguments -mavx2: YES 00:04:20.956 Compiler for C supports arguments -mavx: YES 00:04:20.956 Message: lib/net: Defining dependency "net" 00:04:20.956 Message: lib/meter: Defining dependency "meter" 00:04:20.956 Message: lib/ethdev: Defining dependency "ethdev" 00:04:20.956 Message: lib/pci: Defining dependency "pci" 00:04:20.956 Message: lib/cmdline: Defining dependency "cmdline" 00:04:20.956 Message: lib/hash: Defining dependency "hash" 00:04:20.956 Message: lib/timer: Defining dependency "timer" 00:04:20.956 Message: lib/compressdev: Defining dependency "compressdev" 00:04:20.956 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:20.956 Message: lib/dmadev: Defining dependency "dmadev" 00:04:20.956 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:20.956 Message: lib/power: Defining dependency "power" 00:04:20.956 Message: lib/reorder: Defining dependency "reorder" 00:04:20.956 Message: lib/security: Defining dependency "security" 00:04:20.956 Has header "linux/userfaultfd.h" : YES 00:04:20.956 Has header "linux/vduse.h" : YES 00:04:20.956 Message: lib/vhost: Defining dependency "vhost" 00:04:20.956 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:20.956 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:20.956 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:20.956 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:20.956 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:20.956 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:20.956 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:20.956 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:20.956 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:20.956 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:20.956 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:20.956 Configuring doxy-api-html.conf using configuration 00:04:20.956 Configuring doxy-api-man.conf using configuration 00:04:20.956 Program mandb found: YES (/usr/bin/mandb) 00:04:20.956 Program sphinx-build found: NO 00:04:20.956 Configuring rte_build_config.h using configuration 00:04:20.956 Message: 00:04:20.956 ================= 00:04:20.956 Applications 
Enabled 00:04:20.956 ================= 00:04:20.956 00:04:20.956 apps: 00:04:20.956 00:04:20.956 00:04:20.956 Message: 00:04:20.956 ================= 00:04:20.956 Libraries Enabled 00:04:20.956 ================= 00:04:20.956 00:04:20.956 libs: 00:04:20.956 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:20.956 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:20.956 cryptodev, dmadev, power, reorder, security, vhost, 00:04:20.956 00:04:20.956 Message: 00:04:20.956 =============== 00:04:20.956 Drivers Enabled 00:04:20.956 =============== 00:04:20.956 00:04:20.956 common: 00:04:20.956 00:04:20.956 bus: 00:04:20.956 pci, vdev, 00:04:20.956 mempool: 00:04:20.956 ring, 00:04:20.956 dma: 00:04:20.956 00:04:20.956 net: 00:04:20.956 00:04:20.956 crypto: 00:04:20.956 00:04:20.956 compress: 00:04:20.956 00:04:20.956 vdpa: 00:04:20.956 00:04:20.956 00:04:20.956 Message: 00:04:20.956 ================= 00:04:20.956 Content Skipped 00:04:20.956 ================= 00:04:20.956 00:04:20.956 apps: 00:04:20.956 dumpcap: explicitly disabled via build config 00:04:20.956 graph: explicitly disabled via build config 00:04:20.956 pdump: explicitly disabled via build config 00:04:20.956 proc-info: explicitly disabled via build config 00:04:20.956 test-acl: explicitly disabled via build config 00:04:20.956 test-bbdev: explicitly disabled via build config 00:04:20.956 test-cmdline: explicitly disabled via build config 00:04:20.956 test-compress-perf: explicitly disabled via build config 00:04:20.956 test-crypto-perf: explicitly disabled via build config 00:04:20.956 test-dma-perf: explicitly disabled via build config 00:04:20.956 test-eventdev: explicitly disabled via build config 00:04:20.957 test-fib: explicitly disabled via build config 00:04:20.957 test-flow-perf: explicitly disabled via build config 00:04:20.957 test-gpudev: explicitly disabled via build config 00:04:20.957 test-mldev: explicitly disabled via build config 00:04:20.957 test-pipeline: explicitly disabled via build config 00:04:20.957 test-pmd: explicitly disabled via build config 00:04:20.957 test-regex: explicitly disabled via build config 00:04:20.957 test-sad: explicitly disabled via build config 00:04:20.957 test-security-perf: explicitly disabled via build config 00:04:20.957 00:04:20.957 libs: 00:04:20.957 argparse: explicitly disabled via build config 00:04:20.957 metrics: explicitly disabled via build config 00:04:20.957 acl: explicitly disabled via build config 00:04:20.957 bbdev: explicitly disabled via build config 00:04:20.957 bitratestats: explicitly disabled via build config 00:04:20.957 bpf: explicitly disabled via build config 00:04:20.957 cfgfile: explicitly disabled via build config 00:04:20.957 distributor: explicitly disabled via build config 00:04:20.957 efd: explicitly disabled via build config 00:04:20.957 eventdev: explicitly disabled via build config 00:04:20.957 dispatcher: explicitly disabled via build config 00:04:20.957 gpudev: explicitly disabled via build config 00:04:20.957 gro: explicitly disabled via build config 00:04:20.957 gso: explicitly disabled via build config 00:04:20.957 ip_frag: explicitly disabled via build config 00:04:20.957 jobstats: explicitly disabled via build config 00:04:20.957 latencystats: explicitly disabled via build config 00:04:20.957 lpm: explicitly disabled via build config 00:04:20.957 member: explicitly disabled via build config 00:04:20.957 pcapng: explicitly disabled via build config 00:04:20.957 rawdev: explicitly disabled via build config 00:04:20.957 
regexdev: explicitly disabled via build config 00:04:20.957 mldev: explicitly disabled via build config 00:04:20.957 rib: explicitly disabled via build config 00:04:20.957 sched: explicitly disabled via build config 00:04:20.957 stack: explicitly disabled via build config 00:04:20.957 ipsec: explicitly disabled via build config 00:04:20.957 pdcp: explicitly disabled via build config 00:04:20.957 fib: explicitly disabled via build config 00:04:20.957 port: explicitly disabled via build config 00:04:20.957 pdump: explicitly disabled via build config 00:04:20.957 table: explicitly disabled via build config 00:04:20.957 pipeline: explicitly disabled via build config 00:04:20.957 graph: explicitly disabled via build config 00:04:20.957 node: explicitly disabled via build config 00:04:20.957 00:04:20.957 drivers: 00:04:20.957 common/cpt: not in enabled drivers build config 00:04:20.957 common/dpaax: not in enabled drivers build config 00:04:20.957 common/iavf: not in enabled drivers build config 00:04:20.957 common/idpf: not in enabled drivers build config 00:04:20.957 common/ionic: not in enabled drivers build config 00:04:20.957 common/mvep: not in enabled drivers build config 00:04:20.957 common/octeontx: not in enabled drivers build config 00:04:20.957 bus/auxiliary: not in enabled drivers build config 00:04:20.957 bus/cdx: not in enabled drivers build config 00:04:20.957 bus/dpaa: not in enabled drivers build config 00:04:20.957 bus/fslmc: not in enabled drivers build config 00:04:20.957 bus/ifpga: not in enabled drivers build config 00:04:20.957 bus/platform: not in enabled drivers build config 00:04:20.957 bus/uacce: not in enabled drivers build config 00:04:20.957 bus/vmbus: not in enabled drivers build config 00:04:20.957 common/cnxk: not in enabled drivers build config 00:04:20.957 common/mlx5: not in enabled drivers build config 00:04:20.957 common/nfp: not in enabled drivers build config 00:04:20.957 common/nitrox: not in enabled drivers build config 00:04:20.957 common/qat: not in enabled drivers build config 00:04:20.957 common/sfc_efx: not in enabled drivers build config 00:04:20.957 mempool/bucket: not in enabled drivers build config 00:04:20.957 mempool/cnxk: not in enabled drivers build config 00:04:20.957 mempool/dpaa: not in enabled drivers build config 00:04:20.957 mempool/dpaa2: not in enabled drivers build config 00:04:20.957 mempool/octeontx: not in enabled drivers build config 00:04:20.957 mempool/stack: not in enabled drivers build config 00:04:20.957 dma/cnxk: not in enabled drivers build config 00:04:20.957 dma/dpaa: not in enabled drivers build config 00:04:20.957 dma/dpaa2: not in enabled drivers build config 00:04:20.957 dma/hisilicon: not in enabled drivers build config 00:04:20.957 dma/idxd: not in enabled drivers build config 00:04:20.957 dma/ioat: not in enabled drivers build config 00:04:20.957 dma/skeleton: not in enabled drivers build config 00:04:20.957 net/af_packet: not in enabled drivers build config 00:04:20.957 net/af_xdp: not in enabled drivers build config 00:04:20.957 net/ark: not in enabled drivers build config 00:04:20.957 net/atlantic: not in enabled drivers build config 00:04:20.957 net/avp: not in enabled drivers build config 00:04:20.957 net/axgbe: not in enabled drivers build config 00:04:20.957 net/bnx2x: not in enabled drivers build config 00:04:20.957 net/bnxt: not in enabled drivers build config 00:04:20.957 net/bonding: not in enabled drivers build config 00:04:20.957 net/cnxk: not in enabled drivers build config 00:04:20.957 net/cpfl: 
not in enabled drivers build config 00:04:20.957 net/cxgbe: not in enabled drivers build config 00:04:20.957 net/dpaa: not in enabled drivers build config 00:04:20.957 net/dpaa2: not in enabled drivers build config 00:04:20.957 net/e1000: not in enabled drivers build config 00:04:20.957 net/ena: not in enabled drivers build config 00:04:20.957 net/enetc: not in enabled drivers build config 00:04:20.957 net/enetfec: not in enabled drivers build config 00:04:20.957 net/enic: not in enabled drivers build config 00:04:20.957 net/failsafe: not in enabled drivers build config 00:04:20.957 net/fm10k: not in enabled drivers build config 00:04:20.957 net/gve: not in enabled drivers build config 00:04:20.957 net/hinic: not in enabled drivers build config 00:04:20.957 net/hns3: not in enabled drivers build config 00:04:20.957 net/i40e: not in enabled drivers build config 00:04:20.957 net/iavf: not in enabled drivers build config 00:04:20.957 net/ice: not in enabled drivers build config 00:04:20.957 net/idpf: not in enabled drivers build config 00:04:20.957 net/igc: not in enabled drivers build config 00:04:20.957 net/ionic: not in enabled drivers build config 00:04:20.957 net/ipn3ke: not in enabled drivers build config 00:04:20.957 net/ixgbe: not in enabled drivers build config 00:04:20.957 net/mana: not in enabled drivers build config 00:04:20.957 net/memif: not in enabled drivers build config 00:04:20.957 net/mlx4: not in enabled drivers build config 00:04:20.957 net/mlx5: not in enabled drivers build config 00:04:20.957 net/mvneta: not in enabled drivers build config 00:04:20.957 net/mvpp2: not in enabled drivers build config 00:04:20.957 net/netvsc: not in enabled drivers build config 00:04:20.957 net/nfb: not in enabled drivers build config 00:04:20.957 net/nfp: not in enabled drivers build config 00:04:20.957 net/ngbe: not in enabled drivers build config 00:04:20.957 net/null: not in enabled drivers build config 00:04:20.957 net/octeontx: not in enabled drivers build config 00:04:20.957 net/octeon_ep: not in enabled drivers build config 00:04:20.957 net/pcap: not in enabled drivers build config 00:04:20.957 net/pfe: not in enabled drivers build config 00:04:20.957 net/qede: not in enabled drivers build config 00:04:20.957 net/ring: not in enabled drivers build config 00:04:20.957 net/sfc: not in enabled drivers build config 00:04:20.957 net/softnic: not in enabled drivers build config 00:04:20.957 net/tap: not in enabled drivers build config 00:04:20.957 net/thunderx: not in enabled drivers build config 00:04:20.957 net/txgbe: not in enabled drivers build config 00:04:20.957 net/vdev_netvsc: not in enabled drivers build config 00:04:20.957 net/vhost: not in enabled drivers build config 00:04:20.957 net/virtio: not in enabled drivers build config 00:04:20.957 net/vmxnet3: not in enabled drivers build config 00:04:20.957 raw/*: missing internal dependency, "rawdev" 00:04:20.957 crypto/armv8: not in enabled drivers build config 00:04:20.957 crypto/bcmfs: not in enabled drivers build config 00:04:20.957 crypto/caam_jr: not in enabled drivers build config 00:04:20.957 crypto/ccp: not in enabled drivers build config 00:04:20.957 crypto/cnxk: not in enabled drivers build config 00:04:20.957 crypto/dpaa_sec: not in enabled drivers build config 00:04:20.957 crypto/dpaa2_sec: not in enabled drivers build config 00:04:20.957 crypto/ipsec_mb: not in enabled drivers build config 00:04:20.957 crypto/mlx5: not in enabled drivers build config 00:04:20.957 crypto/mvsam: not in enabled drivers build config 
00:04:20.957 crypto/nitrox: not in enabled drivers build config 00:04:20.957 crypto/null: not in enabled drivers build config 00:04:20.957 crypto/octeontx: not in enabled drivers build config 00:04:20.957 crypto/openssl: not in enabled drivers build config 00:04:20.957 crypto/scheduler: not in enabled drivers build config 00:04:20.957 crypto/uadk: not in enabled drivers build config 00:04:20.957 crypto/virtio: not in enabled drivers build config 00:04:20.957 compress/isal: not in enabled drivers build config 00:04:20.957 compress/mlx5: not in enabled drivers build config 00:04:20.957 compress/nitrox: not in enabled drivers build config 00:04:20.957 compress/octeontx: not in enabled drivers build config 00:04:20.957 compress/zlib: not in enabled drivers build config 00:04:20.957 regex/*: missing internal dependency, "regexdev" 00:04:20.957 ml/*: missing internal dependency, "mldev" 00:04:20.957 vdpa/ifc: not in enabled drivers build config 00:04:20.957 vdpa/mlx5: not in enabled drivers build config 00:04:20.957 vdpa/nfp: not in enabled drivers build config 00:04:20.957 vdpa/sfc: not in enabled drivers build config 00:04:20.957 event/*: missing internal dependency, "eventdev" 00:04:20.957 baseband/*: missing internal dependency, "bbdev" 00:04:20.957 gpu/*: missing internal dependency, "gpudev" 00:04:20.957 00:04:20.957 00:04:20.957 Build targets in project: 85 00:04:20.957 00:04:20.957 DPDK 24.03.0 00:04:20.957 00:04:20.957 User defined options 00:04:20.957 buildtype : debug 00:04:20.957 default_library : shared 00:04:20.957 libdir : lib 00:04:20.958 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:20.958 b_sanitize : address 00:04:20.958 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:20.958 c_link_args : 00:04:20.958 cpu_instruction_set: native 00:04:20.958 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:20.958 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:20.958 enable_docs : false 00:04:20.958 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:20.958 enable_kmods : false 00:04:20.958 max_lcores : 128 00:04:20.958 tests : false 00:04:20.958 00:04:20.958 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:21.217 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:21.217 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:21.217 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:21.217 [3/268] Linking static target lib/librte_kvargs.a 00:04:21.217 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:21.217 [5/268] Linking static target lib/librte_log.a 00:04:21.217 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:21.789 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:21.789 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:21.789 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
00:04:21.789 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:21.789 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:21.789 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:21.789 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:21.789 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.047 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:22.047 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:22.047 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:22.047 [18/268] Linking static target lib/librte_telemetry.a 00:04:22.305 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:22.305 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.305 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:22.305 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:22.305 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:22.305 [24/268] Linking target lib/librte_log.so.24.1 00:04:22.564 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:22.564 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:22.564 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:22.564 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:22.564 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:22.564 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:22.822 [31/268] Linking target lib/librte_kvargs.so.24.1 00:04:22.822 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:22.822 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:23.092 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.092 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:23.092 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:23.092 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:23.092 [38/268] Linking target lib/librte_telemetry.so.24.1 00:04:23.092 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:23.092 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:23.092 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:23.349 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:23.349 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:23.350 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:23.350 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:23.350 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:23.609 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:23.609 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:23.609 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:23.609 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:23.867 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:23.867 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:23.867 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:23.867 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:24.125 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:24.125 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:24.125 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:24.384 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:24.384 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:24.384 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:24.384 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:24.384 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:24.384 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:24.384 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:24.643 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:24.643 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:24.643 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:24.900 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:24.901 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:25.159 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:25.159 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:25.159 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:25.159 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:25.159 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:25.159 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:25.159 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:25.159 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:25.159 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:25.159 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:25.417 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:25.417 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:25.417 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:25.417 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:25.675 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:25.675 [85/268] Linking static target lib/librte_ring.a 00:04:25.675 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:25.675 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:25.675 [88/268] Linking static target lib/librte_eal.a 00:04:25.675 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:25.934 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:25.934 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:25.934 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:25.934 [93/268] Linking static target lib/librte_mempool.a 00:04:26.194 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:26.194 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.194 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:26.194 [97/268] Linking static target lib/librte_rcu.a 00:04:26.194 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:26.194 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:26.194 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:26.453 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:26.453 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:26.711 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:26.711 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:26.711 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:26.711 [106/268] Linking static target lib/librte_mbuf.a 00:04:26.711 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:26.711 [108/268] Linking static target lib/librte_net.a 00:04:26.711 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.970 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:26.970 [111/268] Linking static target lib/librte_meter.a 00:04:26.970 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:27.229 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:27.229 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:27.229 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.229 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:27.229 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.488 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.746 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:27.746 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:28.023 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:28.023 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.023 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:28.282 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:28.282 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:28.282 [126/268] Linking static target lib/librte_pci.a 00:04:28.282 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:28.282 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:28.540 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:28.540 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:28.540 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:28.540 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:28.540 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:28.540 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:28.540 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:28.799 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:28.799 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:28.799 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.799 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:28.799 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:28.799 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:28.799 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:28.799 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:28.799 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:29.057 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:29.057 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:29.057 [147/268] Linking static target lib/librte_cmdline.a 00:04:29.316 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:29.316 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:29.316 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:29.316 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:29.316 [152/268] Linking static target lib/librte_timer.a 00:04:29.575 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:29.575 [154/268] Linking static target lib/librte_ethdev.a 00:04:29.575 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:29.575 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:29.575 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:29.834 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:29.834 [159/268] Linking static target lib/librte_compressdev.a 00:04:30.093 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:30.093 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.093 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:30.351 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:30.351 [164/268] Linking static target lib/librte_hash.a 00:04:30.351 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:30.351 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:30.351 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:30.609 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:30.609 [169/268] Linking static target lib/librte_dmadev.a 00:04:30.609 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:30.867 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.868 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:30.868 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:30.868 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.125 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:31.382 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:31.382 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:31.382 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:31.382 [179/268] Linking static target lib/librte_cryptodev.a 00:04:31.382 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:31.382 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:31.382 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:31.640 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.640 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.897 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:31.897 [186/268] Linking static target lib/librte_power.a 00:04:31.897 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:31.897 [188/268] Linking static target lib/librte_reorder.a 00:04:32.155 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:32.155 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:32.155 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:32.156 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:32.156 [193/268] Linking static target lib/librte_security.a 00:04:32.722 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.722 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:33.038 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.038 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:33.297 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.297 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:33.297 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:33.297 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:33.555 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:33.555 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:33.814 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:33.814 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:33.814 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:33.814 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:33.814 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:33.814 [209/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.072 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:34.072 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:34.072 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:34.072 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:34.331 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:34.331 [215/268] Linking static target drivers/librte_bus_pci.a 00:04:34.331 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:34.331 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:34.331 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:34.331 [219/268] Linking static target drivers/librte_bus_vdev.a 00:04:34.331 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:34.331 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:34.590 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:34.590 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:34.590 [224/268] Linking static target drivers/librte_mempool_ring.a 00:04:34.590 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.590 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:34.849 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.417 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:39.607 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.607 [230/268] Linking target lib/librte_eal.so.24.1 00:04:39.607 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:39.607 [232/268] Linking target lib/librte_dmadev.so.24.1 00:04:39.607 [233/268] Linking target lib/librte_ring.so.24.1 00:04:39.607 [234/268] Linking target lib/librte_meter.so.24.1 00:04:39.607 [235/268] Linking target lib/librte_pci.so.24.1 00:04:39.607 [236/268] Linking target lib/librte_timer.so.24.1 00:04:39.607 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:39.607 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:39.607 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:39.607 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:39.607 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:39.607 [242/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.607 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:39.607 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:39.607 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:39.607 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:39.607 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:39.607 [248/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:39.607 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:39.607 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:39.864 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:39.865 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:04:39.865 [253/268] Linking target lib/librte_net.so.24.1 00:04:39.865 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:39.865 [255/268] Linking target lib/librte_reorder.so.24.1 00:04:39.865 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:39.865 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:40.123 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:40.123 [259/268] Linking target lib/librte_hash.so.24.1 00:04:40.123 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:40.123 [261/268] Linking target lib/librte_security.so.24.1 00:04:40.123 [262/268] Linking static target lib/librte_vhost.a 00:04:40.123 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:40.123 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:40.123 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:40.381 [266/268] Linking target lib/librte_power.so.24.1 00:04:42.914 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.914 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:42.914 INFO: autodetecting backend as ninja 00:04:42.914 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:01.019 CC lib/ut_mock/mock.o 00:05:01.019 CC lib/ut/ut.o 00:05:01.019 CC lib/log/log.o 00:05:01.019 CC lib/log/log_flags.o 00:05:01.019 CC lib/log/log_deprecated.o 00:05:01.019 LIB libspdk_ut.a 00:05:01.019 LIB libspdk_ut_mock.a 00:05:01.019 SO libspdk_ut.so.2.0 00:05:01.019 SO libspdk_ut_mock.so.6.0 00:05:01.019 LIB libspdk_log.a 00:05:01.019 SYMLINK libspdk_ut.so 00:05:01.019 SO libspdk_log.so.7.1 00:05:01.019 SYMLINK libspdk_ut_mock.so 00:05:01.019 SYMLINK libspdk_log.so 00:05:01.019 CC lib/util/base64.o 00:05:01.019 CC lib/util/bit_array.o 00:05:01.019 CC lib/util/crc16.o 00:05:01.019 CC lib/util/cpuset.o 00:05:01.019 CC lib/util/crc32c.o 00:05:01.019 CC lib/util/crc32.o 00:05:01.019 CXX lib/trace_parser/trace.o 00:05:01.019 CC lib/dma/dma.o 00:05:01.019 CC lib/ioat/ioat.o 00:05:01.019 CC lib/vfio_user/host/vfio_user_pci.o 00:05:01.019 CC lib/vfio_user/host/vfio_user.o 00:05:01.019 CC lib/util/crc32_ieee.o 00:05:01.019 CC lib/util/crc64.o 00:05:01.019 CC lib/util/dif.o 00:05:01.019 CC lib/util/fd.o 00:05:01.019 CC lib/util/fd_group.o 00:05:01.019 LIB libspdk_dma.a 00:05:01.019 SO libspdk_dma.so.5.0 00:05:01.019 CC lib/util/file.o 00:05:01.019 CC lib/util/hexlify.o 00:05:01.019 LIB libspdk_ioat.a 00:05:01.019 SYMLINK libspdk_dma.so 00:05:01.019 CC lib/util/iov.o 00:05:01.019 SO libspdk_ioat.so.7.0 00:05:01.019 CC lib/util/math.o 00:05:01.019 CC lib/util/net.o 00:05:01.019 LIB libspdk_vfio_user.a 00:05:01.019 SYMLINK libspdk_ioat.so 00:05:01.019 CC lib/util/pipe.o 00:05:01.019 SO libspdk_vfio_user.so.5.0 00:05:01.019 CC lib/util/strerror_tls.o 00:05:01.019 CC lib/util/string.o 00:05:01.019 SYMLINK libspdk_vfio_user.so 00:05:01.019 CC lib/util/uuid.o 00:05:01.019 CC lib/util/xor.o 00:05:01.019 CC lib/util/zipf.o 00:05:01.019 CC 
lib/util/md5.o 00:05:01.019 LIB libspdk_util.a 00:05:01.019 SO libspdk_util.so.10.1 00:05:01.019 LIB libspdk_trace_parser.a 00:05:01.019 SO libspdk_trace_parser.so.6.0 00:05:01.019 SYMLINK libspdk_util.so 00:05:01.019 SYMLINK libspdk_trace_parser.so 00:05:01.019 CC lib/rdma_utils/rdma_utils.o 00:05:01.019 CC lib/conf/conf.o 00:05:01.019 CC lib/vmd/led.o 00:05:01.019 CC lib/vmd/vmd.o 00:05:01.019 CC lib/json/json_parse.o 00:05:01.019 CC lib/json/json_util.o 00:05:01.019 CC lib/json/json_write.o 00:05:01.019 CC lib/env_dpdk/env.o 00:05:01.019 CC lib/idxd/idxd.o 00:05:01.019 CC lib/idxd/idxd_user.o 00:05:01.019 CC lib/env_dpdk/memory.o 00:05:01.019 CC lib/env_dpdk/pci.o 00:05:01.019 CC lib/idxd/idxd_kernel.o 00:05:01.019 CC lib/env_dpdk/init.o 00:05:01.019 LIB libspdk_rdma_utils.a 00:05:01.019 LIB libspdk_conf.a 00:05:01.019 SO libspdk_conf.so.6.0 00:05:01.019 SO libspdk_rdma_utils.so.1.0 00:05:01.019 LIB libspdk_json.a 00:05:01.019 SYMLINK libspdk_conf.so 00:05:01.019 CC lib/env_dpdk/threads.o 00:05:01.019 SYMLINK libspdk_rdma_utils.so 00:05:01.019 CC lib/env_dpdk/pci_ioat.o 00:05:01.019 SO libspdk_json.so.6.0 00:05:01.020 CC lib/env_dpdk/pci_virtio.o 00:05:01.020 SYMLINK libspdk_json.so 00:05:01.020 CC lib/env_dpdk/pci_vmd.o 00:05:01.020 CC lib/env_dpdk/pci_idxd.o 00:05:01.020 CC lib/rdma_provider/common.o 00:05:01.020 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:01.020 CC lib/env_dpdk/pci_event.o 00:05:01.020 CC lib/env_dpdk/sigbus_handler.o 00:05:01.020 CC lib/jsonrpc/jsonrpc_server.o 00:05:01.020 CC lib/env_dpdk/pci_dpdk.o 00:05:01.020 LIB libspdk_idxd.a 00:05:01.020 LIB libspdk_vmd.a 00:05:01.020 SO libspdk_idxd.so.12.1 00:05:01.020 SO libspdk_vmd.so.6.0 00:05:01.020 SYMLINK libspdk_idxd.so 00:05:01.020 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:01.020 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:01.020 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:01.020 SYMLINK libspdk_vmd.so 00:05:01.020 CC lib/jsonrpc/jsonrpc_client.o 00:05:01.020 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:01.020 LIB libspdk_rdma_provider.a 00:05:01.020 SO libspdk_rdma_provider.so.7.0 00:05:01.020 SYMLINK libspdk_rdma_provider.so 00:05:01.278 LIB libspdk_jsonrpc.a 00:05:01.278 SO libspdk_jsonrpc.so.6.0 00:05:01.537 SYMLINK libspdk_jsonrpc.so 00:05:01.794 LIB libspdk_env_dpdk.a 00:05:01.794 CC lib/rpc/rpc.o 00:05:01.794 SO libspdk_env_dpdk.so.15.1 00:05:02.053 LIB libspdk_rpc.a 00:05:02.053 SYMLINK libspdk_env_dpdk.so 00:05:02.053 SO libspdk_rpc.so.6.0 00:05:02.311 SYMLINK libspdk_rpc.so 00:05:02.570 CC lib/notify/notify_rpc.o 00:05:02.570 CC lib/notify/notify.o 00:05:02.570 CC lib/keyring/keyring.o 00:05:02.570 CC lib/keyring/keyring_rpc.o 00:05:02.570 CC lib/trace/trace.o 00:05:02.570 CC lib/trace/trace_rpc.o 00:05:02.570 CC lib/trace/trace_flags.o 00:05:02.830 LIB libspdk_notify.a 00:05:02.830 SO libspdk_notify.so.6.0 00:05:02.830 LIB libspdk_keyring.a 00:05:02.830 SYMLINK libspdk_notify.so 00:05:02.830 SO libspdk_keyring.so.2.0 00:05:02.830 LIB libspdk_trace.a 00:05:02.830 SYMLINK libspdk_keyring.so 00:05:02.830 SO libspdk_trace.so.11.0 00:05:03.089 SYMLINK libspdk_trace.so 00:05:03.349 CC lib/sock/sock.o 00:05:03.349 CC lib/sock/sock_rpc.o 00:05:03.349 CC lib/thread/thread.o 00:05:03.349 CC lib/thread/iobuf.o 00:05:03.916 LIB libspdk_sock.a 00:05:03.916 SO libspdk_sock.so.10.0 00:05:03.916 SYMLINK libspdk_sock.so 00:05:04.484 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:04.485 CC lib/nvme/nvme_ctrlr.o 00:05:04.485 CC lib/nvme/nvme_fabric.o 00:05:04.485 CC lib/nvme/nvme_ns_cmd.o 00:05:04.485 CC lib/nvme/nvme_ns.o 00:05:04.485 CC 
lib/nvme/nvme_pcie_common.o 00:05:04.485 CC lib/nvme/nvme_pcie.o 00:05:04.485 CC lib/nvme/nvme_qpair.o 00:05:04.485 CC lib/nvme/nvme.o 00:05:05.052 CC lib/nvme/nvme_quirks.o 00:05:05.052 CC lib/nvme/nvme_transport.o 00:05:05.052 LIB libspdk_thread.a 00:05:05.052 SO libspdk_thread.so.11.0 00:05:05.052 CC lib/nvme/nvme_discovery.o 00:05:05.311 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:05.311 SYMLINK libspdk_thread.so 00:05:05.311 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:05.311 CC lib/nvme/nvme_tcp.o 00:05:05.311 CC lib/nvme/nvme_opal.o 00:05:05.311 CC lib/nvme/nvme_io_msg.o 00:05:05.571 CC lib/nvme/nvme_poll_group.o 00:05:05.571 CC lib/nvme/nvme_zns.o 00:05:05.571 CC lib/nvme/nvme_stubs.o 00:05:05.831 CC lib/nvme/nvme_auth.o 00:05:05.831 CC lib/nvme/nvme_cuse.o 00:05:05.831 CC lib/accel/accel.o 00:05:05.831 CC lib/accel/accel_rpc.o 00:05:05.831 CC lib/accel/accel_sw.o 00:05:06.431 CC lib/nvme/nvme_rdma.o 00:05:06.431 CC lib/blob/blobstore.o 00:05:06.431 CC lib/init/json_config.o 00:05:06.431 CC lib/virtio/virtio.o 00:05:06.431 CC lib/fsdev/fsdev.o 00:05:06.690 CC lib/init/subsystem.o 00:05:06.690 CC lib/virtio/virtio_vhost_user.o 00:05:06.690 CC lib/virtio/virtio_vfio_user.o 00:05:06.690 CC lib/init/subsystem_rpc.o 00:05:06.949 CC lib/virtio/virtio_pci.o 00:05:06.949 CC lib/blob/request.o 00:05:06.949 CC lib/fsdev/fsdev_io.o 00:05:06.949 CC lib/init/rpc.o 00:05:06.949 CC lib/fsdev/fsdev_rpc.o 00:05:06.949 CC lib/blob/zeroes.o 00:05:06.949 CC lib/blob/blob_bs_dev.o 00:05:06.949 LIB libspdk_accel.a 00:05:07.208 LIB libspdk_init.a 00:05:07.208 SO libspdk_accel.so.16.0 00:05:07.208 LIB libspdk_virtio.a 00:05:07.208 SO libspdk_init.so.6.0 00:05:07.208 SO libspdk_virtio.so.7.0 00:05:07.208 SYMLINK libspdk_accel.so 00:05:07.208 SYMLINK libspdk_init.so 00:05:07.208 SYMLINK libspdk_virtio.so 00:05:07.467 LIB libspdk_fsdev.a 00:05:07.467 SO libspdk_fsdev.so.2.0 00:05:07.467 SYMLINK libspdk_fsdev.so 00:05:07.467 CC lib/bdev/bdev_zone.o 00:05:07.467 CC lib/bdev/bdev.o 00:05:07.467 CC lib/bdev/scsi_nvme.o 00:05:07.467 CC lib/bdev/bdev_rpc.o 00:05:07.467 CC lib/bdev/part.o 00:05:07.467 CC lib/event/app.o 00:05:07.467 CC lib/event/reactor.o 00:05:07.726 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:07.726 LIB libspdk_nvme.a 00:05:07.726 CC lib/event/log_rpc.o 00:05:07.726 CC lib/event/app_rpc.o 00:05:07.726 CC lib/event/scheduler_static.o 00:05:07.985 SO libspdk_nvme.so.15.0 00:05:08.244 LIB libspdk_event.a 00:05:08.244 SO libspdk_event.so.14.0 00:05:08.244 SYMLINK libspdk_nvme.so 00:05:08.244 SYMLINK libspdk_event.so 00:05:08.530 LIB libspdk_fuse_dispatcher.a 00:05:08.530 SO libspdk_fuse_dispatcher.so.1.0 00:05:08.530 SYMLINK libspdk_fuse_dispatcher.so 00:05:09.908 LIB libspdk_blob.a 00:05:10.167 SO libspdk_blob.so.11.0 00:05:10.167 SYMLINK libspdk_blob.so 00:05:10.425 LIB libspdk_bdev.a 00:05:10.685 CC lib/blobfs/blobfs.o 00:05:10.685 CC lib/blobfs/tree.o 00:05:10.685 CC lib/lvol/lvol.o 00:05:10.685 SO libspdk_bdev.so.17.0 00:05:10.685 SYMLINK libspdk_bdev.so 00:05:10.944 CC lib/nvmf/ctrlr.o 00:05:10.944 CC lib/nvmf/ctrlr_discovery.o 00:05:10.944 CC lib/nvmf/ctrlr_bdev.o 00:05:10.944 CC lib/nvmf/subsystem.o 00:05:10.944 CC lib/ublk/ublk.o 00:05:10.944 CC lib/scsi/dev.o 00:05:10.944 CC lib/ftl/ftl_core.o 00:05:10.944 CC lib/nbd/nbd.o 00:05:11.203 CC lib/scsi/lun.o 00:05:11.462 CC lib/ftl/ftl_init.o 00:05:11.462 CC lib/nbd/nbd_rpc.o 00:05:11.462 CC lib/ftl/ftl_layout.o 00:05:11.462 LIB libspdk_blobfs.a 00:05:11.462 SO libspdk_blobfs.so.10.0 00:05:11.462 CC lib/scsi/port.o 00:05:11.462 SYMLINK 
libspdk_blobfs.so 00:05:11.462 CC lib/nvmf/nvmf.o 00:05:11.462 LIB libspdk_lvol.a 00:05:11.722 LIB libspdk_nbd.a 00:05:11.722 SO libspdk_lvol.so.10.0 00:05:11.722 CC lib/nvmf/nvmf_rpc.o 00:05:11.722 SO libspdk_nbd.so.7.0 00:05:11.722 CC lib/ublk/ublk_rpc.o 00:05:11.722 SYMLINK libspdk_lvol.so 00:05:11.722 CC lib/ftl/ftl_debug.o 00:05:11.722 CC lib/ftl/ftl_io.o 00:05:11.722 CC lib/scsi/scsi.o 00:05:11.722 SYMLINK libspdk_nbd.so 00:05:11.722 CC lib/scsi/scsi_bdev.o 00:05:11.722 CC lib/scsi/scsi_pr.o 00:05:11.722 LIB libspdk_ublk.a 00:05:11.722 CC lib/scsi/scsi_rpc.o 00:05:11.981 SO libspdk_ublk.so.3.0 00:05:11.981 CC lib/scsi/task.o 00:05:11.981 CC lib/ftl/ftl_sb.o 00:05:11.981 SYMLINK libspdk_ublk.so 00:05:11.981 CC lib/ftl/ftl_l2p.o 00:05:11.981 CC lib/ftl/ftl_l2p_flat.o 00:05:11.981 CC lib/nvmf/transport.o 00:05:12.240 CC lib/ftl/ftl_nv_cache.o 00:05:12.240 CC lib/nvmf/tcp.o 00:05:12.240 CC lib/nvmf/stubs.o 00:05:12.240 CC lib/ftl/ftl_band.o 00:05:12.240 LIB libspdk_scsi.a 00:05:12.240 CC lib/nvmf/mdns_server.o 00:05:12.240 SO libspdk_scsi.so.9.0 00:05:12.498 SYMLINK libspdk_scsi.so 00:05:12.498 CC lib/nvmf/rdma.o 00:05:12.498 CC lib/nvmf/auth.o 00:05:12.757 CC lib/iscsi/conn.o 00:05:12.757 CC lib/iscsi/init_grp.o 00:05:12.757 CC lib/vhost/vhost.o 00:05:12.757 CC lib/vhost/vhost_rpc.o 00:05:13.016 CC lib/vhost/vhost_scsi.o 00:05:13.016 CC lib/vhost/vhost_blk.o 00:05:13.016 CC lib/iscsi/iscsi.o 00:05:13.276 CC lib/ftl/ftl_band_ops.o 00:05:13.276 CC lib/iscsi/param.o 00:05:13.535 CC lib/vhost/rte_vhost_user.o 00:05:13.535 CC lib/iscsi/portal_grp.o 00:05:13.535 CC lib/ftl/ftl_writer.o 00:05:13.535 CC lib/iscsi/tgt_node.o 00:05:13.794 CC lib/iscsi/iscsi_subsystem.o 00:05:13.794 CC lib/iscsi/iscsi_rpc.o 00:05:13.794 CC lib/ftl/ftl_rq.o 00:05:13.794 CC lib/ftl/ftl_reloc.o 00:05:14.053 CC lib/ftl/ftl_l2p_cache.o 00:05:14.053 CC lib/ftl/ftl_p2l.o 00:05:14.053 CC lib/iscsi/task.o 00:05:14.053 CC lib/ftl/ftl_p2l_log.o 00:05:14.053 CC lib/ftl/mngt/ftl_mngt.o 00:05:14.312 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:14.312 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:14.312 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:14.312 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:14.312 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:14.571 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:14.572 LIB libspdk_vhost.a 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:14.572 SO libspdk_vhost.so.8.0 00:05:14.572 LIB libspdk_iscsi.a 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:14.572 SO libspdk_iscsi.so.8.0 00:05:14.572 SYMLINK libspdk_vhost.so 00:05:14.572 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:14.572 CC lib/ftl/utils/ftl_conf.o 00:05:14.830 CC lib/ftl/utils/ftl_md.o 00:05:14.830 CC lib/ftl/utils/ftl_mempool.o 00:05:14.830 CC lib/ftl/utils/ftl_bitmap.o 00:05:14.830 CC lib/ftl/utils/ftl_property.o 00:05:14.830 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:14.830 SYMLINK libspdk_iscsi.so 00:05:14.830 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:14.830 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:14.830 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:15.089 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:15.089 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:15.089 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:15.089 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:15.089 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:15.089 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:15.089 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:15.089 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:15.089 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:15.089 CC lib/ftl/base/ftl_base_dev.o 00:05:15.348 CC lib/ftl/base/ftl_base_bdev.o 00:05:15.348 CC lib/ftl/ftl_trace.o 00:05:15.608 LIB libspdk_ftl.a 00:05:15.608 LIB libspdk_nvmf.a 00:05:15.868 SO libspdk_ftl.so.9.0 00:05:15.868 SO libspdk_nvmf.so.20.0 00:05:16.127 SYMLINK libspdk_nvmf.so 00:05:16.127 SYMLINK libspdk_ftl.so 00:05:16.696 CC module/env_dpdk/env_dpdk_rpc.o 00:05:16.696 CC module/keyring/file/keyring.o 00:05:16.696 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:16.696 CC module/sock/posix/posix.o 00:05:16.696 CC module/blob/bdev/blob_bdev.o 00:05:16.696 CC module/accel/error/accel_error.o 00:05:16.696 CC module/fsdev/aio/fsdev_aio.o 00:05:16.696 CC module/keyring/linux/keyring.o 00:05:16.696 CC module/scheduler/gscheduler/gscheduler.o 00:05:16.696 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:16.696 LIB libspdk_env_dpdk_rpc.a 00:05:16.696 SO libspdk_env_dpdk_rpc.so.6.0 00:05:16.696 SYMLINK libspdk_env_dpdk_rpc.so 00:05:16.955 CC module/keyring/linux/keyring_rpc.o 00:05:16.955 CC module/keyring/file/keyring_rpc.o 00:05:16.955 LIB libspdk_scheduler_dpdk_governor.a 00:05:16.955 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:16.955 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:16.955 LIB libspdk_scheduler_gscheduler.a 00:05:16.955 CC module/accel/error/accel_error_rpc.o 00:05:16.955 SO libspdk_scheduler_gscheduler.so.4.0 00:05:16.955 LIB libspdk_scheduler_dynamic.a 00:05:16.955 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:16.955 SO libspdk_scheduler_dynamic.so.4.0 00:05:16.955 LIB libspdk_keyring_linux.a 00:05:16.955 LIB libspdk_blob_bdev.a 00:05:16.955 LIB libspdk_keyring_file.a 00:05:16.955 SO libspdk_keyring_linux.so.1.0 00:05:16.955 SO libspdk_blob_bdev.so.11.0 00:05:16.955 SYMLINK libspdk_scheduler_gscheduler.so 00:05:16.955 SO libspdk_keyring_file.so.2.0 00:05:16.955 CC module/fsdev/aio/linux_aio_mgr.o 00:05:16.955 SYMLINK libspdk_scheduler_dynamic.so 00:05:16.955 SYMLINK libspdk_keyring_linux.so 00:05:16.955 LIB libspdk_accel_error.a 00:05:16.955 SYMLINK libspdk_blob_bdev.so 00:05:17.213 SYMLINK libspdk_keyring_file.so 00:05:17.213 SO libspdk_accel_error.so.2.0 00:05:17.213 CC module/accel/ioat/accel_ioat_rpc.o 00:05:17.213 CC module/accel/ioat/accel_ioat.o 00:05:17.213 SYMLINK libspdk_accel_error.so 00:05:17.213 CC module/accel/dsa/accel_dsa.o 00:05:17.213 CC module/accel/iaa/accel_iaa.o 00:05:17.213 CC module/accel/iaa/accel_iaa_rpc.o 00:05:17.213 CC module/accel/dsa/accel_dsa_rpc.o 00:05:17.213 LIB libspdk_accel_ioat.a 00:05:17.213 CC module/bdev/delay/vbdev_delay.o 00:05:17.471 CC module/bdev/error/vbdev_error.o 00:05:17.471 SO libspdk_accel_ioat.so.6.0 00:05:17.471 CC module/blobfs/bdev/blobfs_bdev.o 00:05:17.471 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:17.471 LIB libspdk_fsdev_aio.a 00:05:17.471 SYMLINK libspdk_accel_ioat.so 00:05:17.471 SO libspdk_fsdev_aio.so.1.0 00:05:17.471 LIB libspdk_accel_iaa.a 00:05:17.471 LIB libspdk_sock_posix.a 00:05:17.471 SO libspdk_accel_iaa.so.3.0 00:05:17.471 LIB libspdk_accel_dsa.a 00:05:17.471 SO libspdk_sock_posix.so.6.0 00:05:17.471 SYMLINK libspdk_fsdev_aio.so 00:05:17.471 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:17.471 SO libspdk_accel_dsa.so.5.0 00:05:17.471 SYMLINK libspdk_accel_iaa.so 00:05:17.730 CC module/bdev/gpt/gpt.o 00:05:17.730 CC module/bdev/lvol/vbdev_lvol.o 00:05:17.730 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:17.730 CC module/bdev/error/vbdev_error_rpc.o 00:05:17.730 SYMLINK libspdk_sock_posix.so 
00:05:17.730 SYMLINK libspdk_accel_dsa.so 00:05:17.730 CC module/bdev/gpt/vbdev_gpt.o 00:05:17.730 LIB libspdk_bdev_delay.a 00:05:17.730 LIB libspdk_blobfs_bdev.a 00:05:17.730 CC module/bdev/null/bdev_null.o 00:05:17.730 SO libspdk_bdev_delay.so.6.0 00:05:17.730 CC module/bdev/malloc/bdev_malloc.o 00:05:17.730 SO libspdk_blobfs_bdev.so.6.0 00:05:17.730 LIB libspdk_bdev_error.a 00:05:17.730 CC module/bdev/nvme/bdev_nvme.o 00:05:17.730 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:17.730 SO libspdk_bdev_error.so.6.0 00:05:17.989 SYMLINK libspdk_bdev_delay.so 00:05:17.989 CC module/bdev/nvme/nvme_rpc.o 00:05:17.989 SYMLINK libspdk_blobfs_bdev.so 00:05:17.989 CC module/bdev/null/bdev_null_rpc.o 00:05:17.989 SYMLINK libspdk_bdev_error.so 00:05:17.990 CC module/bdev/nvme/bdev_mdns_client.o 00:05:17.990 LIB libspdk_bdev_gpt.a 00:05:17.990 SO libspdk_bdev_gpt.so.6.0 00:05:17.990 SYMLINK libspdk_bdev_gpt.so 00:05:17.990 CC module/bdev/nvme/vbdev_opal.o 00:05:17.990 LIB libspdk_bdev_null.a 00:05:18.248 SO libspdk_bdev_null.so.6.0 00:05:18.248 LIB libspdk_bdev_lvol.a 00:05:18.248 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:18.248 SYMLINK libspdk_bdev_null.so 00:05:18.248 SO libspdk_bdev_lvol.so.6.0 00:05:18.248 CC module/bdev/passthru/vbdev_passthru.o 00:05:18.248 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:18.248 CC module/bdev/raid/bdev_raid.o 00:05:18.248 CC module/bdev/split/vbdev_split.o 00:05:18.248 SYMLINK libspdk_bdev_lvol.so 00:05:18.249 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:18.508 LIB libspdk_bdev_malloc.a 00:05:18.508 SO libspdk_bdev_malloc.so.6.0 00:05:18.508 CC module/bdev/xnvme/bdev_xnvme.o 00:05:18.508 CC module/bdev/split/vbdev_split_rpc.o 00:05:18.508 SYMLINK libspdk_bdev_malloc.so 00:05:18.508 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:18.508 CC module/bdev/aio/bdev_aio.o 00:05:18.508 CC module/bdev/ftl/bdev_ftl.o 00:05:18.508 LIB libspdk_bdev_passthru.a 00:05:18.508 SO libspdk_bdev_passthru.so.6.0 00:05:18.766 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:18.766 LIB libspdk_bdev_split.a 00:05:18.766 SYMLINK libspdk_bdev_passthru.so 00:05:18.766 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:18.766 SO libspdk_bdev_split.so.6.0 00:05:18.766 CC module/bdev/iscsi/bdev_iscsi.o 00:05:18.766 LIB libspdk_bdev_xnvme.a 00:05:18.766 SYMLINK libspdk_bdev_split.so 00:05:18.766 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:18.766 LIB libspdk_bdev_zone_block.a 00:05:18.766 SO libspdk_bdev_xnvme.so.3.0 00:05:18.766 SO libspdk_bdev_zone_block.so.6.0 00:05:18.766 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:18.766 SYMLINK libspdk_bdev_xnvme.so 00:05:19.025 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:19.025 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:19.025 CC module/bdev/aio/bdev_aio_rpc.o 00:05:19.025 SYMLINK libspdk_bdev_zone_block.so 00:05:19.025 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:19.025 CC module/bdev/raid/bdev_raid_rpc.o 00:05:19.025 CC module/bdev/raid/bdev_raid_sb.o 00:05:19.025 LIB libspdk_bdev_aio.a 00:05:19.025 SO libspdk_bdev_aio.so.6.0 00:05:19.025 LIB libspdk_bdev_ftl.a 00:05:19.025 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:19.284 SO libspdk_bdev_ftl.so.6.0 00:05:19.284 SYMLINK libspdk_bdev_aio.so 00:05:19.284 CC module/bdev/raid/raid0.o 00:05:19.284 LIB libspdk_bdev_iscsi.a 00:05:19.284 CC module/bdev/raid/raid1.o 00:05:19.284 SO libspdk_bdev_iscsi.so.6.0 00:05:19.284 CC module/bdev/raid/concat.o 00:05:19.284 SYMLINK libspdk_bdev_ftl.so 00:05:19.284 SYMLINK libspdk_bdev_iscsi.so 00:05:19.543 LIB libspdk_bdev_virtio.a 00:05:19.543 LIB 
libspdk_bdev_raid.a 00:05:19.543 SO libspdk_bdev_virtio.so.6.0 00:05:19.543 SYMLINK libspdk_bdev_virtio.so 00:05:19.543 SO libspdk_bdev_raid.so.6.0 00:05:19.801 SYMLINK libspdk_bdev_raid.so 00:05:20.737 LIB libspdk_bdev_nvme.a 00:05:20.737 SO libspdk_bdev_nvme.so.7.1 00:05:20.996 SYMLINK libspdk_bdev_nvme.so 00:05:21.564 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:21.564 CC module/event/subsystems/keyring/keyring.o 00:05:21.564 CC module/event/subsystems/vmd/vmd.o 00:05:21.564 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:21.564 CC module/event/subsystems/iobuf/iobuf.o 00:05:21.564 CC module/event/subsystems/scheduler/scheduler.o 00:05:21.564 CC module/event/subsystems/fsdev/fsdev.o 00:05:21.564 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:21.564 CC module/event/subsystems/sock/sock.o 00:05:21.822 LIB libspdk_event_vhost_blk.a 00:05:21.822 LIB libspdk_event_fsdev.a 00:05:21.822 LIB libspdk_event_scheduler.a 00:05:21.822 LIB libspdk_event_vmd.a 00:05:21.822 LIB libspdk_event_keyring.a 00:05:21.822 LIB libspdk_event_iobuf.a 00:05:21.822 LIB libspdk_event_sock.a 00:05:21.822 SO libspdk_event_fsdev.so.1.0 00:05:21.822 SO libspdk_event_vhost_blk.so.3.0 00:05:21.822 SO libspdk_event_scheduler.so.4.0 00:05:21.822 SO libspdk_event_sock.so.5.0 00:05:21.822 SO libspdk_event_iobuf.so.3.0 00:05:21.822 SO libspdk_event_vmd.so.6.0 00:05:21.822 SO libspdk_event_keyring.so.1.0 00:05:21.822 SYMLINK libspdk_event_scheduler.so 00:05:21.822 SYMLINK libspdk_event_vhost_blk.so 00:05:21.822 SYMLINK libspdk_event_fsdev.so 00:05:21.822 SYMLINK libspdk_event_sock.so 00:05:21.822 SYMLINK libspdk_event_keyring.so 00:05:21.822 SYMLINK libspdk_event_iobuf.so 00:05:21.822 SYMLINK libspdk_event_vmd.so 00:05:22.081 CC module/event/subsystems/accel/accel.o 00:05:22.339 LIB libspdk_event_accel.a 00:05:22.339 SO libspdk_event_accel.so.6.0 00:05:22.598 SYMLINK libspdk_event_accel.so 00:05:22.856 CC module/event/subsystems/bdev/bdev.o 00:05:23.115 LIB libspdk_event_bdev.a 00:05:23.115 SO libspdk_event_bdev.so.6.0 00:05:23.115 SYMLINK libspdk_event_bdev.so 00:05:23.682 CC module/event/subsystems/ublk/ublk.o 00:05:23.682 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:23.682 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:23.682 CC module/event/subsystems/scsi/scsi.o 00:05:23.682 CC module/event/subsystems/nbd/nbd.o 00:05:23.682 LIB libspdk_event_ublk.a 00:05:23.682 LIB libspdk_event_nbd.a 00:05:23.682 LIB libspdk_event_scsi.a 00:05:23.682 SO libspdk_event_ublk.so.3.0 00:05:23.682 SO libspdk_event_scsi.so.6.0 00:05:23.682 SO libspdk_event_nbd.so.6.0 00:05:23.682 LIB libspdk_event_nvmf.a 00:05:23.682 SYMLINK libspdk_event_ublk.so 00:05:23.682 SYMLINK libspdk_event_nbd.so 00:05:23.941 SYMLINK libspdk_event_scsi.so 00:05:23.941 SO libspdk_event_nvmf.so.6.0 00:05:23.941 SYMLINK libspdk_event_nvmf.so 00:05:24.199 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:24.200 CC module/event/subsystems/iscsi/iscsi.o 00:05:24.200 LIB libspdk_event_vhost_scsi.a 00:05:24.458 LIB libspdk_event_iscsi.a 00:05:24.458 SO libspdk_event_vhost_scsi.so.3.0 00:05:24.458 SO libspdk_event_iscsi.so.6.0 00:05:24.458 SYMLINK libspdk_event_vhost_scsi.so 00:05:24.458 SYMLINK libspdk_event_iscsi.so 00:05:24.718 SO libspdk.so.6.0 00:05:24.718 SYMLINK libspdk.so 00:05:24.976 CXX app/trace/trace.o 00:05:24.976 CC app/spdk_nvme_identify/identify.o 00:05:24.976 CC app/trace_record/trace_record.o 00:05:24.976 CC app/spdk_lspci/spdk_lspci.o 00:05:24.976 CC app/spdk_nvme_perf/perf.o 00:05:24.976 CC app/iscsi_tgt/iscsi_tgt.o 00:05:24.976 CC 
app/nvmf_tgt/nvmf_main.o 00:05:24.977 CC app/spdk_tgt/spdk_tgt.o 00:05:24.977 CC examples/util/zipf/zipf.o 00:05:24.977 CC test/thread/poller_perf/poller_perf.o 00:05:25.235 LINK spdk_lspci 00:05:25.235 LINK nvmf_tgt 00:05:25.235 LINK iscsi_tgt 00:05:25.235 LINK spdk_tgt 00:05:25.235 LINK poller_perf 00:05:25.235 LINK spdk_trace_record 00:05:25.235 LINK zipf 00:05:25.495 CC app/spdk_nvme_discover/discovery_aer.o 00:05:25.495 LINK spdk_trace 00:05:25.495 CC app/spdk_top/spdk_top.o 00:05:25.495 CC app/spdk_dd/spdk_dd.o 00:05:25.495 CC examples/vmd/lsvmd/lsvmd.o 00:05:25.495 CC examples/ioat/perf/perf.o 00:05:25.495 CC test/dma/test_dma/test_dma.o 00:05:25.495 LINK spdk_nvme_discover 00:05:25.754 CC test/app/bdev_svc/bdev_svc.o 00:05:25.754 LINK lsvmd 00:05:25.754 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:25.754 LINK ioat_perf 00:05:25.754 LINK bdev_svc 00:05:25.754 CC examples/vmd/led/led.o 00:05:26.013 LINK spdk_nvme_perf 00:05:26.013 LINK spdk_nvme_identify 00:05:26.013 LINK spdk_dd 00:05:26.013 TEST_HEADER include/spdk/accel.h 00:05:26.013 TEST_HEADER include/spdk/accel_module.h 00:05:26.013 TEST_HEADER include/spdk/assert.h 00:05:26.013 TEST_HEADER include/spdk/barrier.h 00:05:26.013 TEST_HEADER include/spdk/base64.h 00:05:26.013 TEST_HEADER include/spdk/bdev.h 00:05:26.013 TEST_HEADER include/spdk/bdev_module.h 00:05:26.013 TEST_HEADER include/spdk/bdev_zone.h 00:05:26.013 TEST_HEADER include/spdk/bit_array.h 00:05:26.013 TEST_HEADER include/spdk/bit_pool.h 00:05:26.013 TEST_HEADER include/spdk/blob_bdev.h 00:05:26.013 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:26.013 TEST_HEADER include/spdk/blobfs.h 00:05:26.013 TEST_HEADER include/spdk/blob.h 00:05:26.014 TEST_HEADER include/spdk/conf.h 00:05:26.014 TEST_HEADER include/spdk/config.h 00:05:26.014 TEST_HEADER include/spdk/cpuset.h 00:05:26.014 TEST_HEADER include/spdk/crc16.h 00:05:26.014 TEST_HEADER include/spdk/crc32.h 00:05:26.014 TEST_HEADER include/spdk/crc64.h 00:05:26.014 TEST_HEADER include/spdk/dif.h 00:05:26.014 TEST_HEADER include/spdk/dma.h 00:05:26.014 TEST_HEADER include/spdk/endian.h 00:05:26.014 TEST_HEADER include/spdk/env_dpdk.h 00:05:26.014 TEST_HEADER include/spdk/env.h 00:05:26.014 TEST_HEADER include/spdk/event.h 00:05:26.014 TEST_HEADER include/spdk/fd_group.h 00:05:26.014 LINK led 00:05:26.014 TEST_HEADER include/spdk/fd.h 00:05:26.014 TEST_HEADER include/spdk/file.h 00:05:26.014 TEST_HEADER include/spdk/fsdev.h 00:05:26.014 TEST_HEADER include/spdk/fsdev_module.h 00:05:26.014 CC examples/ioat/verify/verify.o 00:05:26.014 TEST_HEADER include/spdk/ftl.h 00:05:26.014 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:26.014 TEST_HEADER include/spdk/gpt_spec.h 00:05:26.014 TEST_HEADER include/spdk/hexlify.h 00:05:26.014 TEST_HEADER include/spdk/histogram_data.h 00:05:26.014 TEST_HEADER include/spdk/idxd.h 00:05:26.014 TEST_HEADER include/spdk/idxd_spec.h 00:05:26.014 TEST_HEADER include/spdk/init.h 00:05:26.014 TEST_HEADER include/spdk/ioat.h 00:05:26.014 TEST_HEADER include/spdk/ioat_spec.h 00:05:26.014 TEST_HEADER include/spdk/iscsi_spec.h 00:05:26.014 TEST_HEADER include/spdk/json.h 00:05:26.014 TEST_HEADER include/spdk/jsonrpc.h 00:05:26.014 TEST_HEADER include/spdk/keyring.h 00:05:26.014 TEST_HEADER include/spdk/keyring_module.h 00:05:26.014 TEST_HEADER include/spdk/likely.h 00:05:26.014 TEST_HEADER include/spdk/log.h 00:05:26.014 TEST_HEADER include/spdk/lvol.h 00:05:26.014 TEST_HEADER include/spdk/md5.h 00:05:26.014 TEST_HEADER include/spdk/memory.h 00:05:26.014 TEST_HEADER include/spdk/mmio.h 
00:05:26.014 TEST_HEADER include/spdk/nbd.h 00:05:26.014 TEST_HEADER include/spdk/net.h 00:05:26.014 TEST_HEADER include/spdk/notify.h 00:05:26.014 TEST_HEADER include/spdk/nvme.h 00:05:26.014 TEST_HEADER include/spdk/nvme_intel.h 00:05:26.014 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:26.014 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:26.014 TEST_HEADER include/spdk/nvme_spec.h 00:05:26.014 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:26.014 TEST_HEADER include/spdk/nvme_zns.h 00:05:26.014 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:26.014 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:26.014 TEST_HEADER include/spdk/nvmf.h 00:05:26.014 TEST_HEADER include/spdk/nvmf_spec.h 00:05:26.014 TEST_HEADER include/spdk/nvmf_transport.h 00:05:26.014 TEST_HEADER include/spdk/opal.h 00:05:26.014 TEST_HEADER include/spdk/opal_spec.h 00:05:26.014 TEST_HEADER include/spdk/pci_ids.h 00:05:26.014 TEST_HEADER include/spdk/pipe.h 00:05:26.014 TEST_HEADER include/spdk/queue.h 00:05:26.014 LINK test_dma 00:05:26.014 TEST_HEADER include/spdk/reduce.h 00:05:26.014 TEST_HEADER include/spdk/rpc.h 00:05:26.014 TEST_HEADER include/spdk/scheduler.h 00:05:26.014 TEST_HEADER include/spdk/scsi.h 00:05:26.014 TEST_HEADER include/spdk/scsi_spec.h 00:05:26.014 TEST_HEADER include/spdk/sock.h 00:05:26.014 TEST_HEADER include/spdk/stdinc.h 00:05:26.014 TEST_HEADER include/spdk/string.h 00:05:26.014 TEST_HEADER include/spdk/thread.h 00:05:26.014 TEST_HEADER include/spdk/trace.h 00:05:26.014 TEST_HEADER include/spdk/trace_parser.h 00:05:26.014 TEST_HEADER include/spdk/tree.h 00:05:26.014 TEST_HEADER include/spdk/ublk.h 00:05:26.014 TEST_HEADER include/spdk/util.h 00:05:26.014 TEST_HEADER include/spdk/uuid.h 00:05:26.014 TEST_HEADER include/spdk/version.h 00:05:26.014 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:26.014 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:26.014 TEST_HEADER include/spdk/vhost.h 00:05:26.014 TEST_HEADER include/spdk/vmd.h 00:05:26.014 TEST_HEADER include/spdk/xor.h 00:05:26.014 TEST_HEADER include/spdk/zipf.h 00:05:26.273 CXX test/cpp_headers/accel.o 00:05:26.273 LINK nvme_fuzz 00:05:26.273 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:26.273 LINK verify 00:05:26.273 CC examples/idxd/perf/perf.o 00:05:26.273 CXX test/cpp_headers/accel_module.o 00:05:26.273 CC examples/sock/hello_world/hello_sock.o 00:05:26.273 CC examples/thread/thread/thread_ex.o 00:05:26.273 LINK interrupt_tgt 00:05:26.273 CXX test/cpp_headers/assert.o 00:05:26.273 CC test/app/histogram_perf/histogram_perf.o 00:05:26.531 CC test/app/jsoncat/jsoncat.o 00:05:26.531 LINK spdk_top 00:05:26.531 CXX test/cpp_headers/barrier.o 00:05:26.531 LINK histogram_perf 00:05:26.531 LINK jsoncat 00:05:26.531 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:26.531 LINK idxd_perf 00:05:26.531 LINK thread 00:05:26.531 LINK hello_sock 00:05:26.790 CXX test/cpp_headers/base64.o 00:05:26.790 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:26.790 CC test/app/stub/stub.o 00:05:26.790 CC test/env/mem_callbacks/mem_callbacks.o 00:05:26.790 CC app/fio/nvme/fio_plugin.o 00:05:26.790 CXX test/cpp_headers/bdev.o 00:05:26.790 CC app/vhost/vhost.o 00:05:26.790 CC test/event/event_perf/event_perf.o 00:05:26.790 CC test/nvme/aer/aer.o 00:05:27.050 LINK stub 00:05:27.050 CC examples/nvme/hello_world/hello_world.o 00:05:27.050 LINK event_perf 00:05:27.050 CXX test/cpp_headers/bdev_module.o 00:05:27.050 LINK vhost 00:05:27.050 LINK vhost_fuzz 00:05:27.050 LINK hello_world 00:05:27.050 CC test/nvme/reset/reset.o 00:05:27.308 CXX test/cpp_headers/bdev_zone.o 
00:05:27.308 LINK aer 00:05:27.308 CC test/event/reactor/reactor.o 00:05:27.308 CXX test/cpp_headers/bit_array.o 00:05:27.309 LINK mem_callbacks 00:05:27.309 CC test/nvme/sgl/sgl.o 00:05:27.309 LINK reactor 00:05:27.309 CXX test/cpp_headers/bit_pool.o 00:05:27.309 LINK spdk_nvme 00:05:27.309 CC examples/nvme/reconnect/reconnect.o 00:05:27.567 CC test/env/vtophys/vtophys.o 00:05:27.567 LINK reset 00:05:27.567 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:27.567 CC app/fio/bdev/fio_plugin.o 00:05:27.567 CXX test/cpp_headers/blob_bdev.o 00:05:27.567 CC test/event/reactor_perf/reactor_perf.o 00:05:27.567 LINK vtophys 00:05:27.567 LINK env_dpdk_post_init 00:05:27.567 CC test/event/app_repeat/app_repeat.o 00:05:27.567 LINK sgl 00:05:27.826 CC test/event/scheduler/scheduler.o 00:05:27.826 LINK reactor_perf 00:05:27.826 CXX test/cpp_headers/blobfs_bdev.o 00:05:27.826 LINK reconnect 00:05:27.826 LINK app_repeat 00:05:27.826 CC test/nvme/e2edp/nvme_dp.o 00:05:27.826 CC test/nvme/overhead/overhead.o 00:05:27.826 CC test/env/memory/memory_ut.o 00:05:27.826 CXX test/cpp_headers/blobfs.o 00:05:27.826 CC test/nvme/err_injection/err_injection.o 00:05:27.826 LINK iscsi_fuzz 00:05:27.826 LINK scheduler 00:05:28.085 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:28.085 CC examples/nvme/arbitration/arbitration.o 00:05:28.085 LINK spdk_bdev 00:05:28.085 CXX test/cpp_headers/blob.o 00:05:28.085 LINK nvme_dp 00:05:28.085 LINK err_injection 00:05:28.085 CXX test/cpp_headers/conf.o 00:05:28.085 CXX test/cpp_headers/config.o 00:05:28.085 LINK overhead 00:05:28.085 CXX test/cpp_headers/cpuset.o 00:05:28.343 CXX test/cpp_headers/crc16.o 00:05:28.343 CC test/nvme/startup/startup.o 00:05:28.343 LINK arbitration 00:05:28.343 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:28.343 CC examples/nvme/hotplug/hotplug.o 00:05:28.343 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:28.343 CC examples/nvme/abort/abort.o 00:05:28.343 CC test/env/pci/pci_ut.o 00:05:28.601 CXX test/cpp_headers/crc32.o 00:05:28.601 LINK nvme_manage 00:05:28.601 LINK startup 00:05:28.601 LINK cmb_copy 00:05:28.601 LINK hotplug 00:05:28.601 LINK hello_fsdev 00:05:28.601 CXX test/cpp_headers/crc64.o 00:05:28.601 CC examples/accel/perf/accel_perf.o 00:05:28.860 LINK abort 00:05:28.860 CC test/nvme/reserve/reserve.o 00:05:28.860 CXX test/cpp_headers/dif.o 00:05:28.860 CC test/nvme/simple_copy/simple_copy.o 00:05:28.860 LINK pci_ut 00:05:28.860 CC examples/blob/cli/blobcli.o 00:05:28.860 CC examples/blob/hello_world/hello_blob.o 00:05:28.860 CC test/nvme/connect_stress/connect_stress.o 00:05:28.860 CXX test/cpp_headers/dma.o 00:05:29.120 LINK memory_ut 00:05:29.120 LINK reserve 00:05:29.120 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:29.120 LINK simple_copy 00:05:29.120 LINK connect_stress 00:05:29.120 CXX test/cpp_headers/endian.o 00:05:29.120 LINK hello_blob 00:05:29.120 CXX test/cpp_headers/env_dpdk.o 00:05:29.120 LINK accel_perf 00:05:29.120 CC test/nvme/boot_partition/boot_partition.o 00:05:29.120 LINK pmr_persistence 00:05:29.378 CXX test/cpp_headers/env.o 00:05:29.378 CC test/rpc_client/rpc_client_test.o 00:05:29.379 CC test/nvme/compliance/nvme_compliance.o 00:05:29.379 LINK boot_partition 00:05:29.379 CXX test/cpp_headers/event.o 00:05:29.379 LINK blobcli 00:05:29.379 CC test/nvme/fused_ordering/fused_ordering.o 00:05:29.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:29.379 LINK rpc_client_test 00:05:29.637 CC test/nvme/fdp/fdp.o 00:05:29.637 CC test/accel/dif/dif.o 00:05:29.637 CXX test/cpp_headers/fd_group.o 
00:05:29.637 CC test/blobfs/mkfs/mkfs.o 00:05:29.637 CC test/nvme/cuse/cuse.o 00:05:29.637 CXX test/cpp_headers/fd.o 00:05:29.637 LINK fused_ordering 00:05:29.637 LINK doorbell_aers 00:05:29.637 LINK nvme_compliance 00:05:29.895 LINK mkfs 00:05:29.895 CC examples/bdev/hello_world/hello_bdev.o 00:05:29.895 CXX test/cpp_headers/file.o 00:05:29.895 CXX test/cpp_headers/fsdev.o 00:05:29.895 CC examples/bdev/bdevperf/bdevperf.o 00:05:29.895 CXX test/cpp_headers/fsdev_module.o 00:05:29.895 LINK fdp 00:05:29.895 CXX test/cpp_headers/ftl.o 00:05:29.895 CXX test/cpp_headers/fuse_dispatcher.o 00:05:29.895 CXX test/cpp_headers/gpt_spec.o 00:05:30.168 LINK hello_bdev 00:05:30.168 CXX test/cpp_headers/hexlify.o 00:05:30.168 CXX test/cpp_headers/histogram_data.o 00:05:30.168 CC test/lvol/esnap/esnap.o 00:05:30.168 CXX test/cpp_headers/idxd.o 00:05:30.168 CXX test/cpp_headers/idxd_spec.o 00:05:30.168 CXX test/cpp_headers/init.o 00:05:30.168 CXX test/cpp_headers/ioat.o 00:05:30.168 CXX test/cpp_headers/ioat_spec.o 00:05:30.168 CXX test/cpp_headers/iscsi_spec.o 00:05:30.470 LINK dif 00:05:30.470 CXX test/cpp_headers/json.o 00:05:30.470 CXX test/cpp_headers/jsonrpc.o 00:05:30.470 CXX test/cpp_headers/keyring.o 00:05:30.470 CXX test/cpp_headers/keyring_module.o 00:05:30.470 CXX test/cpp_headers/likely.o 00:05:30.470 CXX test/cpp_headers/log.o 00:05:30.470 CXX test/cpp_headers/lvol.o 00:05:30.470 CXX test/cpp_headers/md5.o 00:05:30.470 CXX test/cpp_headers/memory.o 00:05:30.470 CXX test/cpp_headers/mmio.o 00:05:30.470 CXX test/cpp_headers/nbd.o 00:05:30.470 CXX test/cpp_headers/net.o 00:05:30.470 CXX test/cpp_headers/notify.o 00:05:30.470 CXX test/cpp_headers/nvme.o 00:05:30.728 CXX test/cpp_headers/nvme_intel.o 00:05:30.728 CXX test/cpp_headers/nvme_ocssd.o 00:05:30.728 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:30.728 CXX test/cpp_headers/nvme_spec.o 00:05:30.728 CXX test/cpp_headers/nvme_zns.o 00:05:30.728 LINK bdevperf 00:05:30.728 CC test/bdev/bdevio/bdevio.o 00:05:30.728 CXX test/cpp_headers/nvmf_cmd.o 00:05:30.728 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:30.728 CXX test/cpp_headers/nvmf.o 00:05:30.988 CXX test/cpp_headers/nvmf_spec.o 00:05:30.988 CXX test/cpp_headers/nvmf_transport.o 00:05:30.988 CXX test/cpp_headers/opal.o 00:05:30.988 LINK cuse 00:05:30.988 CXX test/cpp_headers/opal_spec.o 00:05:30.988 CXX test/cpp_headers/pci_ids.o 00:05:30.988 CXX test/cpp_headers/pipe.o 00:05:30.988 CXX test/cpp_headers/queue.o 00:05:30.988 CXX test/cpp_headers/reduce.o 00:05:30.988 CXX test/cpp_headers/rpc.o 00:05:30.988 CXX test/cpp_headers/scheduler.o 00:05:31.247 CC examples/nvmf/nvmf/nvmf.o 00:05:31.247 CXX test/cpp_headers/scsi.o 00:05:31.247 CXX test/cpp_headers/scsi_spec.o 00:05:31.247 LINK bdevio 00:05:31.247 CXX test/cpp_headers/sock.o 00:05:31.247 CXX test/cpp_headers/stdinc.o 00:05:31.247 CXX test/cpp_headers/string.o 00:05:31.247 CXX test/cpp_headers/thread.o 00:05:31.247 CXX test/cpp_headers/trace.o 00:05:31.247 CXX test/cpp_headers/trace_parser.o 00:05:31.247 CXX test/cpp_headers/tree.o 00:05:31.247 CXX test/cpp_headers/ublk.o 00:05:31.247 CXX test/cpp_headers/util.o 00:05:31.247 CXX test/cpp_headers/uuid.o 00:05:31.247 CXX test/cpp_headers/version.o 00:05:31.247 CXX test/cpp_headers/vfio_user_pci.o 00:05:31.505 CXX test/cpp_headers/vfio_user_spec.o 00:05:31.505 CXX test/cpp_headers/vhost.o 00:05:31.505 CXX test/cpp_headers/vmd.o 00:05:31.505 LINK nvmf 00:05:31.505 CXX test/cpp_headers/xor.o 00:05:31.505 CXX test/cpp_headers/zipf.o 00:05:36.775 LINK esnap 00:05:36.775 00:05:36.775 real 1m27.552s 
00:05:36.775 user 7m33.792s 00:05:36.775 sys 1m58.284s 00:05:36.775 08:18:23 make -- common/autotest_common.sh@1133 -- $ xtrace_disable 00:05:36.775 08:18:23 make -- common/autotest_common.sh@10 -- $ set +x 00:05:36.775 ************************************ 00:05:36.775 END TEST make 00:05:36.775 ************************************ 00:05:36.775 08:18:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:36.775 08:18:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:36.775 08:18:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:36.775 08:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:36.775 08:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:36.775 08:18:23 -- pm/common@44 -- $ pid=5315 00:05:36.775 08:18:23 -- pm/common@50 -- $ kill -TERM 5315 00:05:36.775 08:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:36.775 08:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:36.775 08:18:23 -- pm/common@44 -- $ pid=5317 00:05:36.775 08:18:23 -- pm/common@50 -- $ kill -TERM 5317 00:05:36.775 08:18:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:36.775 08:18:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:36.775 08:18:23 -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:36.775 08:18:23 -- common/autotest_common.sh@1638 -- # lcov --version 00:05:36.775 08:18:23 -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:36.775 08:18:23 -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:36.775 08:18:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.775 08:18:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.775 08:18:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.775 08:18:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.775 08:18:23 -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.775 08:18:23 -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.775 08:18:23 -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.775 08:18:23 -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.775 08:18:23 -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.775 08:18:23 -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.775 08:18:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.775 08:18:23 -- scripts/common.sh@344 -- # case "$op" in 00:05:36.775 08:18:23 -- scripts/common.sh@345 -- # : 1 00:05:36.775 08:18:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.775 08:18:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.775 08:18:23 -- scripts/common.sh@365 -- # decimal 1 00:05:36.775 08:18:23 -- scripts/common.sh@353 -- # local d=1 00:05:36.775 08:18:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.775 08:18:23 -- scripts/common.sh@355 -- # echo 1 00:05:36.775 08:18:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.775 08:18:24 -- scripts/common.sh@366 -- # decimal 2 00:05:36.775 08:18:24 -- scripts/common.sh@353 -- # local d=2 00:05:36.775 08:18:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.775 08:18:24 -- scripts/common.sh@355 -- # echo 2 00:05:36.775 08:18:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.775 08:18:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.775 08:18:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.775 08:18:24 -- scripts/common.sh@368 -- # return 0 00:05:36.775 08:18:24 -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.775 08:18:24 -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:36.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.775 --rc genhtml_branch_coverage=1 00:05:36.775 --rc genhtml_function_coverage=1 00:05:36.775 --rc genhtml_legend=1 00:05:36.775 --rc geninfo_all_blocks=1 00:05:36.775 --rc geninfo_unexecuted_blocks=1 00:05:36.775 00:05:36.775 ' 00:05:36.775 08:18:24 -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:36.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.775 --rc genhtml_branch_coverage=1 00:05:36.775 --rc genhtml_function_coverage=1 00:05:36.775 --rc genhtml_legend=1 00:05:36.775 --rc geninfo_all_blocks=1 00:05:36.775 --rc geninfo_unexecuted_blocks=1 00:05:36.775 00:05:36.775 ' 00:05:36.775 08:18:24 -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:36.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.775 --rc genhtml_branch_coverage=1 00:05:36.775 --rc genhtml_function_coverage=1 00:05:36.775 --rc genhtml_legend=1 00:05:36.775 --rc geninfo_all_blocks=1 00:05:36.775 --rc geninfo_unexecuted_blocks=1 00:05:36.775 00:05:36.775 ' 00:05:36.775 08:18:24 -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:36.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.775 --rc genhtml_branch_coverage=1 00:05:36.776 --rc genhtml_function_coverage=1 00:05:36.776 --rc genhtml_legend=1 00:05:36.776 --rc geninfo_all_blocks=1 00:05:36.776 --rc geninfo_unexecuted_blocks=1 00:05:36.776 00:05:36.776 ' 00:05:36.776 08:18:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.776 08:18:24 -- nvmf/common.sh@7 -- # uname -s 00:05:36.776 08:18:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.776 08:18:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.776 08:18:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.776 08:18:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.776 08:18:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.776 08:18:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.776 08:18:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.776 08:18:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.776 08:18:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.776 08:18:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.776 08:18:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:05:36.776 
08:18:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:05:36.776 08:18:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.776 08:18:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.776 08:18:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.776 08:18:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.776 08:18:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.776 08:18:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.776 08:18:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.776 08:18:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.776 08:18:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.776 08:18:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.776 08:18:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.776 08:18:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.776 08:18:24 -- paths/export.sh@5 -- # export PATH 00:05:36.776 08:18:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.776 08:18:24 -- nvmf/common.sh@51 -- # : 0 00:05:36.776 08:18:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.776 08:18:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.776 08:18:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.776 08:18:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.776 08:18:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.776 08:18:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.776 08:18:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.776 08:18:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.776 08:18:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.776 08:18:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:36.776 08:18:24 -- spdk/autotest.sh@32 -- # uname -s 00:05:36.776 08:18:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:36.776 08:18:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c 
%h' 00:05:36.776 08:18:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:36.776 08:18:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:36.776 08:18:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:36.776 08:18:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:36.776 08:18:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:36.776 08:18:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:36.776 08:18:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54800 00:05:36.776 08:18:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:36.776 08:18:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:36.776 08:18:24 -- pm/common@17 -- # local monitor 00:05:36.776 08:18:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:36.776 08:18:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:36.776 08:18:24 -- pm/common@21 -- # date +%s 00:05:36.776 08:18:24 -- pm/common@25 -- # sleep 1 00:05:36.776 08:18:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732090704 00:05:36.776 08:18:24 -- pm/common@21 -- # date +%s 00:05:36.776 08:18:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732090704 00:05:36.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732090704_collect-cpu-load.pm.log 00:05:36.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732090704_collect-vmstat.pm.log 00:05:37.711 08:18:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:37.711 08:18:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:37.711 08:18:25 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:37.711 08:18:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.711 08:18:25 -- spdk/autotest.sh@59 -- # create_test_list 00:05:37.711 08:18:25 -- common/autotest_common.sh@755 -- # xtrace_disable 00:05:37.711 08:18:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.711 08:18:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:37.711 08:18:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:37.711 08:18:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:37.711 08:18:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:37.711 08:18:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:37.711 08:18:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:37.711 08:18:25 -- common/autotest_common.sh@1445 -- # uname 00:05:37.711 08:18:25 -- common/autotest_common.sh@1445 -- # '[' Linux = FreeBSD ']' 00:05:37.711 08:18:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:37.711 08:18:25 -- common/autotest_common.sh@1465 -- # uname 00:05:37.711 08:18:25 -- common/autotest_common.sh@1465 -- # [[ Linux = FreeBSD ]] 00:05:37.711 08:18:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:37.711 08:18:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 
00:05:37.969 lcov: LCOV version 1.15 00:05:37.970 08:18:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:52.905 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:52.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:10.990 08:18:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:10.990 08:18:55 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:10.990 08:18:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.990 08:18:55 -- spdk/autotest.sh@78 -- # rm -f 00:06:10.990 08:18:55 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.990 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:10.990 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:10.990 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:10.990 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:10.990 08:18:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:10.990 08:18:57 -- common/autotest_common.sh@1602 -- # zoned_devs=() 00:06:10.990 08:18:57 -- common/autotest_common.sh@1602 -- # local -gA zoned_devs 00:06:10.990 08:18:57 -- common/autotest_common.sh@1603 -- # local nvme bdf 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme0n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme1n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme2n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n2 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme2n2 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 
-- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n3 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme2n3 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3c3n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme3c3n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:06:10.990 08:18:57 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1595 -- # local device=nvme3n1 00:06:10.990 08:18:57 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:10.990 08:18:57 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184177 s, 56.9 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434938 s, 241 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid 
-s PTTYPE -o value /dev/nvme2n1 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611957 s, 171 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591574 s, 177 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372832 s, 281 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.990 08:18:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.990 08:18:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:10.990 08:18:57 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:10.990 08:18:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:10.990 No valid GPT data, bailing 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:10.990 08:18:57 -- scripts/common.sh@394 -- # pt= 00:06:10.990 08:18:57 -- scripts/common.sh@395 -- # return 1 00:06:10.990 08:18:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:10.990 1+0 records in 00:06:10.990 1+0 records out 00:06:10.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618395 s, 170 MB/s 00:06:10.990 08:18:57 -- spdk/autotest.sh@105 -- # sync 00:06:10.990 08:18:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:10.990 08:18:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:10.990 08:18:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.526 08:19:01 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.784 08:19:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:13.784 08:19:01 -- spdk/autotest.sh@111 
-- # [[ 0 -eq 1 ]] 00:06:13.784 08:19:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.918 Hugepages 00:06:14.918 node hugesize free / total 00:06:14.918 node0 1048576kB 0 / 0 00:06:14.918 node0 2048kB 0 / 0 00:06:14.918 00:06:14.918 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.918 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:15.178 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:15.178 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:15.437 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:15.437 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:15.437 08:19:02 -- spdk/autotest.sh@117 -- # uname -s 00:06:15.437 08:19:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:15.437 08:19:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:15.437 08:19:02 -- nvme/functions.sh@217 -- # scan_nvme_ctrls 00:06:15.437 08:19:02 -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:06:15.437 08:19:02 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:06:15.438 08:19:02 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@51 -- # pci=0000:00:10.0 00:06:15.438 08:19:02 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:10.0 00:06:15.438 08:19:02 -- scripts/common.sh@18 -- # local i 00:06:15.438 08:19:02 -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:15.438 08:19:02 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:15.438 08:19:02 -- scripts/common.sh@27 -- # return 0 00:06:15.438 08:19:02 -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 00:06:15.438 08:19:02 -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:06:15.438 08:19:02 -- nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@20 -- # shift 00:06:15.438 08:19:02 -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme0 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x1b36"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[vid]=0x1b36 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x1af4"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x1af4 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 12340 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="12340 "' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[sn]='12340 ' 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 
QEMU NVMe Ctrl ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="8.0.0 "' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[fr]='8.0.0 ' 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="6"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[rab]=6 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="525400"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[ieee]=525400 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="7"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[mdts]=7 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[cntlid]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x100"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[oaes]=0x100 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 
00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x8000"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x8000 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[nvmsr]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="0"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[mec]=0 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:06:15.438 08:19:02 -- 
nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x12a"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[oacs]=0x12a 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.438 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.438 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="3"' 00:06:15.438 08:19:02 -- nvme/functions.sh@25 -- # nvme0[acl]=3 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.439 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.439 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[frmw]="0x3"' 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # nvme0[frmw]=0x3 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.439 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0x7"' 00:06:15.439 08:19:02 -- nvme/functions.sh@25 -- # nvme0[lpa]=0x7 00:06:15.439 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.703 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="0"' 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # nvme0[elpe]=0 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.703 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="0"' 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # nvme0[npss]=0 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.703 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0"' 00:06:15.703 08:19:02 -- nvme/functions.sh@25 -- # nvme0[avscc]=0 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.703 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.703 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="343"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[wctemp]=343 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="373"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[cctemp]=373 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[mtfa]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[hmpre]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="0"' 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # nvme0[edstt]=0 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:02 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:02 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:02 -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="0"' 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[dsto]=0 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.704 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[hctma]=0 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.704 08:19:03 -- nvme/functions.sh@23 -- # 
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[mntmt]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[sanicap]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[hmminds]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[anatt]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[anacap]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[pels]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[domainid]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[megcap]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[maxcmd]=0
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[nn]=256
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[oncs]=0x15d
00:06:15.704 08:19:03 -- nvme/functions.sh@25 -- # nvme0[fuses]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[fna]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[vwc]=0x7
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[awun]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[awupf]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[icsvscc]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[nwpc]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[acwu]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[ocfs]=0x3
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[sgls]=0x1
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[mnan]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[maxdna]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[maxcna]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[oaqd]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[icdoff]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[fcatt]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[msdbd]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[ofcs]=0
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0[active_power_workload]=-
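The repeating pattern above is nvme/functions.sh's nvme_get helper: it runs `nvme id-ctrl` (or `id-ns`), splits each output line on `:` with `read`, and evals every field/value pair into a global associative array named after the device. A minimal sketch of that pattern, assuming bash >= 4.2; `nvme_get_sketch` is an illustrative name, not the verbatim SPDK helper:

    # Sketch of the traced nvme_get pattern (nvme/functions.sh@18-25).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # e.g. declares the global array nvme0
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # field name, padding stripped
            val=${val# }               # value, one leading space stripped
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val" # e.g. nvme0[sqes]=0x66
        done < <("$@")                 # e.g. < <(nvme id-ctrl /dev/nvme0)
    }

    # Usage matching the trace:
    nvme_get_sketch nvme0 nvme id-ctrl /dev/nvme0
    echo "${nvme0[subnqn]}"            # nqn.2019-08.org.qemu:12340

Note that because `read -r reg val` only consumes the first `:`, values that themselves contain colons (such as the subnqn above) survive intact in $val.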
00:06:15.705 08:19:03 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns
00:06:15.705 08:19:03 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:06:15.705 08:19:03 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:06:15.705 08:19:03 -- nvme/functions.sh@58 -- # ns_dev=nvme0n1
00:06:15.705 08:19:03 -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:06:15.705 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val
00:06:15.705 08:19:03 -- nvme/functions.sh@20 -- # shift
00:06:15.705 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()'
00:06:15.705 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme0n1
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0x17a17a
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0x17a17a
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x17a17a
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x14
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=7
00:06:15.705 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0x7
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[mc]=0x3
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0x1f
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[dps]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nmic]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=1
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nawun]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[npwg]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[npwa]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[npdg]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[npda]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nows]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=128
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[mcl]=128
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[msrc]=127
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:06:15.706 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:06:15.707 08:19:03 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:06:15.707 08:19:03 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0
00:06:15.707 08:19:03 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns
00:06:15.707 08:19:03 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:10.0
00:06:15.707 08:19:03 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0
00:06:15.707 08:19:03 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme*
00:06:15.707 08:19:03 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:06:15.707 08:19:03 -- nvme/functions.sh@51 -- # pci=0000:00:11.0
00:06:15.707 08:19:03 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:11.0
00:06:15.707 08:19:03 -- scripts/common.sh@18 -- # local i
00:06:15.707 08:19:03 -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:06:15.707 08:19:03 -- scripts/common.sh@25 -- # [[ -z '' ]]
00:06:15.707 08:19:03 -- scripts/common.sh@27 -- # return 0
00:06:15.707 08:19:03 -- nvme/functions.sh@53 -- # ctrl_dev=nvme1
00:06:15.707 08:19:03 -- nvme/functions.sh@54 -- # nvme_get nvme1 id-ctrl /dev/nvme1
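Lines @49-@65 above are one full pass of the discovery loop: walk /sys/class/nvme/nvme*, vet the PCI address with pci_can_use, decode the controller, decode each of its namespaces, then register everything in the ctrls_g/nvmes_g/bdfs_g/ordered_ctrls_g maps. A condensed sketch of that shape; the allow/block-list test inside pci_can_use (scripts/common.sh@18-27) is elided, and nvme_get_sketch from the previous snippet stands in for the real nvme_get:

    # Sketch of the discovery loop traced above (nvme/functions.sh@49-65).
    declare -gA ctrls_g nvmes_g bdfs_g
    declare -ga ordered_ctrls_g

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                            # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")") # BDF, e.g. 0000:00:10.0
        nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do             # nvme0n1, nvme0n2, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                 # keyed by namespace number
        done
        unset -n _ctrl_ns
        ctrls_g["$ctrl_dev"]=$ctrl_dev
        nvmes_g["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs_g["$ctrl_dev"]=$pci
        ordered_ctrls_g[${ctrl_dev/nvme/}]=$ctrl_dev
    done

With that shape in mind, the trace below is simply the second iteration of the loop, this time for nvme1 at 0000:00:11.0.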
00:06:15.707 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme1 reg val
00:06:15.707 08:19:03 -- nvme/functions.sh@20 -- # shift
00:06:15.707 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme1=()'
00:06:15.707 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme1
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[vid]=0x1b36
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ssvid]=0x1af4
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[sn]='12341 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fr]='8.0.0 '
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rab]=6
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ieee]=525400
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[cmic]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mdts]=7
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[cntlid]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ver]=0x10400
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rtd3r]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rtd3e]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[oaes]=0x100
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ctratt]=0x8000
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rrls]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[cntrltype]=1
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[crdt1]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[crdt2]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[crdt3]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[nvmsr]=0
00:06:15.707 08:19:03 -- nvme/functions.sh@25 -- # nvme1[vwci]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mec]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[oacs]=0x12a
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[acl]=3
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[aerl]=3
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[frmw]=0x3
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[lpa]=0x7
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[elpe]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[npss]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[avscc]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[apsta]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[wctemp]=343
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[cctemp]=373
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mtfa]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[hmpre]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[hmmin]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[tnvmcap]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[unvmcap]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rpmbs]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[edstt]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[dsto]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fwug]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[kas]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[hctma]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mntmt]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mxtmt]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[sanicap]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[hmminds]=0
00:06:15.708 08:19:03 -- nvme/functions.sh@25 -- # nvme1[hmmaxd]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[nsetidmax]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[endgidmax]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[anatt]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[anacap]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[anagrpmax]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[nanagrpid]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[pels]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[domainid]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[megcap]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[sqes]=0x66
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[cqes]=0x44
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[maxcmd]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[nn]=256
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[oncs]=0x15d
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fuses]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fna]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[vwc]=0x7
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[awun]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[awupf]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[icsvscc]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[nwpc]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[acwu]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ocfs]=0x3
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[sgls]=0x1
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[mnan]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[maxdna]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[maxcna]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[oaqd]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12341
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ioccsz]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[iorcsz]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[icdoff]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[fcatt]=0
00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[msdbd]=0
00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ofcs]=0
00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[active_power_workload]=-
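With both controllers decoded into associative arrays, later steps can query any identify field directly; the values in the comments are the ones this run recorded for the two QEMU controllers:

    echo "${nvme0[subnqn]}"   # nqn.2019-08.org.qemu:12340
    echo "${nvme1[subnqn]}"   # nqn.2019-08.org.qemu:12341
    echo "${nvme1[mdts]}"     # 7
    echo "${nvme1[wctemp]}"   # 343 (Kelvin, per the NVMe spec)
    echo "${bdfs_g[nvme0]}"   # 0000:00:10.0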
08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.709 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1[msdbd]="0"' 00:06:15.709 08:19:03 -- nvme/functions.sh@25 -- # nvme1[msdbd]=0 00:06:15.709 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1[ofcs]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ofcs]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1[active_power_workload]="-"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1[active_power_workload]=- 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme1_ns 00:06:15.710 08:19:03 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:06:15.710 08:19:03 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@58 -- # ns_dev=nvme1n1 00:06:15.710 08:19:03 -- nvme/functions.sh@59 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:06:15.710 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme1n1 reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.710 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme1n1=()' 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme1n1 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsze]="0x140000"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nsze]=0x140000 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[ncap]="0x140000"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[ncap]=0x140000 00:06:15.710 
08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nuse]="0x140000"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nuse]=0x140000 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nsfeat]=0x14 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nlbaf]="7"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nlbaf]=7 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[flbas]="0x4"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[flbas]=0x4 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mc]="0x3"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[mc]=0x3 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[dpc]="0x1f"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[dpc]=0x1f 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[dps]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[dps]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nmic]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nmic]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[rescap]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[rescap]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[fpi]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[fpi]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.710 08:19:03 -- 
nvme/functions.sh@25 -- # eval 'nvme1n1[dlfeat]="1"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[dlfeat]=1 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawun]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nawun]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawupf]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nawupf]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nacwu]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nacwu]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabsn]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nabsn]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabo]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nabo]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabspf]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nabspf]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[noiob]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[noiob]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmcap]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nvmcap]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwg]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[npwg]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwa]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[npwa]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg 
val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npdg]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[npdg]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npda]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[npda]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.710 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nows]="0"' 00:06:15.710 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nows]=0 00:06:15.710 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mssrl]="128"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[mssrl]=128 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mcl]="128"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[mcl]=128 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[msrc]="127"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[msrc]=127 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nulbaf]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nulbaf]=0 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[anagrpid]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[anagrpid]=0 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsattr]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nsattr]=0 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmsetid]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nvmsetid]=0 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[endgid]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[endgid]=0 
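With nsze/ncap/nuse parsed as 0x140000 blocks for nvme1n1, and the in-use LBA format reported a little further down as lbads:12 (4 KiB logical blocks), the namespace size works out to 0x140000 * 4096 bytes = 5 GiB. A quick sketch of that arithmetic, as an ad-hoc illustration rather than a helper from functions.sh:

    nsze=0x140000                           # blocks, as parsed above
    lbads=12                                # from the "(in use)" lbaf4 entry below
    echo $(( nsze * (1 << lbads) )) bytes   # 5368709120 = 5 GiB
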
00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[eui64]=0000000000000000 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:06:15.711 08:19:03 -- 
nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:06:15.711 08:19:03 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme1 00:06:15.711 08:19:03 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme1_ns 00:06:15.711 08:19:03 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:11.0 00:06:15.711 08:19:03 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme1 00:06:15.711 08:19:03 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:06:15.711 08:19:03 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@51 -- # pci=0000:00:12.0 00:06:15.711 08:19:03 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:12.0 00:06:15.711 08:19:03 -- scripts/common.sh@18 -- # local i 00:06:15.711 08:19:03 -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:06:15.711 08:19:03 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:15.711 08:19:03 -- scripts/common.sh@27 -- # return 0 00:06:15.711 08:19:03 -- nvme/functions.sh@53 -- # ctrl_dev=nvme2 00:06:15.711 08:19:03 -- nvme/functions.sh@54 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:06:15.711 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme2 reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.711 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme2=()' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme2 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[vid]="0x1b36"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[vid]=0x1b36 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ssvid]="0x1af4"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ssvid]=0x1af4 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 12342 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[sn]="12342 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[sn]='12342 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- 
nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fr]="8.0.0 "' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fr]='8.0.0 ' 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rab]="6"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rab]=6 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ieee]="525400"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ieee]=525400 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[cmic]="0"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[cmic]=0 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.711 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mdts]="7"' 00:06:15.711 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mdts]=7 00:06:15.711 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[cntlid]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[cntlid]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ver]="0x10400"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ver]=0x10400 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rtd3r]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rtd3r]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rtd3e]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rtd3e]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[oaes]="0x100"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[oaes]=0x100 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ctratt]="0x8000"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # 
nvme2[ctratt]=0x8000 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rrls]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rrls]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[cntrltype]="1"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[cntrltype]=1 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[crdt1]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[crdt1]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[crdt2]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[crdt2]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[crdt3]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[crdt3]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[nvmsr]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[nvmsr]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[vwci]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[vwci]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mec]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mec]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[oacs]="0x12a"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[oacs]=0x12a 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- 
nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[acl]="3"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[acl]=3 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[aerl]="3"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[aerl]=3 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[frmw]="0x3"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[frmw]=0x3 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[lpa]="0x7"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[lpa]=0x7 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[elpe]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[elpe]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[npss]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[npss]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[avscc]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[avscc]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[apsta]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[apsta]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[wctemp]="343"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[wctemp]=343 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[cctemp]="373"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[cctemp]=373 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mtfa]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mtfa]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- 
nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[hmpre]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[hmpre]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[hmmin]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[hmmin]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[tnvmcap]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[tnvmcap]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[unvmcap]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[unvmcap]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rpmbs]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rpmbs]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[edstt]="0"' 00:06:15.712 08:19:03 -- nvme/functions.sh@25 -- # nvme2[edstt]=0 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.712 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.712 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[dsto]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[dsto]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fwug]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fwug]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[kas]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[kas]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[hctma]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[hctma]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mntmt]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mntmt]=0 00:06:15.713 08:19:03 -- 
nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mxtmt]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mxtmt]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[sanicap]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[sanicap]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[hmminds]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[hmminds]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[hmmaxd]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[hmmaxd]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[nsetidmax]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[nsetidmax]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[endgidmax]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[endgidmax]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[anatt]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[anatt]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[anacap]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[anacap]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[anagrpmax]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[anagrpmax]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[nanagrpid]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[nanagrpid]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[pels]="0"' 
00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[pels]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[domainid]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[domainid]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[megcap]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[megcap]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[sqes]="0x66"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[sqes]=0x66 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[cqes]="0x44"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[cqes]=0x44 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[maxcmd]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[maxcmd]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[nn]="256"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[nn]=256 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[oncs]="0x15d"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[oncs]=0x15d 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fuses]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fuses]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fna]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fna]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[vwc]="0x7"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[vwc]=0x7 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
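Among the nvme2 fields captured just above, mdts=7 bounds the largest single data transfer: MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN, which this log does not show). A hedged sketch assuming the common 4 KiB minimum page:

    mdts=7
    mpsmin=4096                             # assumption: CAP.MPSMIN of 4 KiB, not in this log
    echo $(( (1 << mdts) * mpsmin )) bytes  # 524288 = 512 KiB per transfer
    # sqes=0x66 / cqes=0x44 above encode sizes the same way: low nibble is the
    # required log2 size, high nibble the maximum, i.e. 64 B SQEs and 16 B CQEs
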
00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[awun]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[awun]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[awupf]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[awupf]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[icsvscc]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[icsvscc]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[nwpc]="0"' 00:06:15.713 08:19:03 -- nvme/functions.sh@25 -- # nvme2[nwpc]=0 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.713 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.713 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[acwu]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[acwu]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ocfs]="0x3"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ocfs]=0x3 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[sgls]="0x1"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[sgls]=0x1 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[mnan]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[mnan]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[maxdna]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[maxdna]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[maxcna]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[maxcna]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[oaqd]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[oaqd]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 
08:19:03 -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ioccsz]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ioccsz]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[iorcsz]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[iorcsz]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[icdoff]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[icdoff]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[fcatt]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[fcatt]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[msdbd]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[msdbd]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ofcs]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ofcs]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2[active_power_workload]="-"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2[active_power_workload]=- 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 
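The controller walk that led to this parse is visible earlier in the trace: the script loops over /sys/class/nvme/nvme*, resolves each controller's PCI address (pci=0000:00:12.0 for nvme2), and gates it through pci_can_use before calling nvme_get. A simplified sketch of that discovery step; the real pci_can_use in scripts/common.sh appears to match the BDF against allow/deny lists, which are empty in this run, hence the bare `[[ =~ 0000:00:12.0 ]]` test above:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        echo "found ${ctrl##*/} at $bdf"
    done
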
00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme2_ns 00:06:15.714 08:19:03 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:06:15.714 08:19:03 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@58 -- # ns_dev=nvme2n1 00:06:15.714 08:19:03 -- nvme/functions.sh@59 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:06:15.714 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme2n1 reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.714 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme2n1=()' 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme2n1 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsze]="0x100000"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nsze]=0x100000 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[ncap]="0x100000"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[ncap]=0x100000 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nuse]="0x100000"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nuse]=0x100000 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nsfeat]=0x14 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nlbaf]="7"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nlbaf]=7 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[flbas]="0x4"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[flbas]=0x4 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[mc]="0x3"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[mc]=0x3 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:06:15.714 08:19:03 -- 
nvme/functions.sh@25 -- # eval 'nvme2n1[dpc]="0x1f"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[dpc]=0x1f 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[dps]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[dps]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nmic]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nmic]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[rescap]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[rescap]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[fpi]="0"' 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[fpi]=0 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.714 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.714 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.714 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[dlfeat]="1"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[dlfeat]=1 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nawun]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nawun]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nawupf]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nawupf]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nacwu]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nacwu]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabsn]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nabsn]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabo]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nabo]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 
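For each namespace, the low nibble of flbas (0x4 for nvme2n1, parsed above) selects which of the lbaf0..lbaf7 descriptors, captured just below, is in use; lbads inside that descriptor is log2 of the logical block size. A sketch of the decode, illustrative only, with values taken from this log:

    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    idx=$(( flbas & 0xf ))                         # low nibble selects the format -> 4
    var="lbaf$idx"
    lbads=${!var#*lbads:}; lbads=${lbads%% *}      # extract "12" from the descriptor
    echo "logical block: $(( 1 << lbads )) bytes"  # 4096
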
00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabspf]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nabspf]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[noiob]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[noiob]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nvmcap]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nvmcap]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[npwg]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[npwg]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[npwa]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[npwa]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[npdg]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[npdg]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[npda]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[npda]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nows]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nows]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[mssrl]="128"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[mssrl]=128 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[mcl]="128"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[mcl]=128 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[msrc]="127"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[msrc]=127 00:06:15.715 08:19:03 -- 
nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nulbaf]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nulbaf]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[anagrpid]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[anagrpid]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsattr]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nsattr]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nvmsetid]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nvmsetid]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[endgid]="0"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[endgid]=0 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[eui64]=0000000000000000 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:06:15.715 08:19:03 -- 
nvme/functions.sh@25 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:06:15.715 08:19:03 -- nvme/functions.sh@25 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.715 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.715 08:19:03 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:06:15.715 08:19:03 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:06:15.715 08:19:03 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@58 -- # ns_dev=nvme2n2 00:06:15.716 08:19:03 -- nvme/functions.sh@59 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:06:15.716 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme2n2 reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.716 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme2n2=()' 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme2n2 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsze]="0x100000"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nsze]=0x100000 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 
0x100000 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[ncap]="0x100000"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[ncap]=0x100000 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nuse]="0x100000"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nuse]=0x100000 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nsfeat]=0x14 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nlbaf]="7"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nlbaf]=7 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[flbas]="0x4"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[flbas]=0x4 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[mc]="0x3"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[mc]=0x3 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[dpc]="0x1f"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[dpc]=0x1f 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[dps]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[dps]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nmic]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nmic]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[rescap]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[rescap]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[fpi]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[fpi]=0 00:06:15.716 08:19:03 -- 
nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[dlfeat]="1"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[dlfeat]=1 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nawun]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nawun]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nawupf]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nawupf]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nacwu]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nacwu]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabsn]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nabsn]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabo]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nabo]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabspf]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nabspf]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[noiob]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[noiob]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nvmcap]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nvmcap]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[npwg]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[npwg]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[npwa]="0"' 
00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[npwa]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[npdg]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[npdg]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[npda]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[npda]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nows]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nows]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[mssrl]="128"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[mssrl]=128 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[mcl]="128"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[mcl]=128 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[msrc]="127"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[msrc]=127 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nulbaf]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nulbaf]=0 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.716 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.716 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[anagrpid]="0"' 00:06:15.716 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[anagrpid]=0 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsattr]="0"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nsattr]=0 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nvmsetid]="0"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nvmsetid]=0 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[endgid]="0"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[endgid]=0 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[eui64]=0000000000000000 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:06:15.717 08:19:03 -- 
nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:06:15.717 08:19:03 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:06:15.717 08:19:03 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@58 -- # ns_dev=nvme2n3 00:06:15.717 08:19:03 -- nvme/functions.sh@59 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:06:15.717 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme2n3 reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.717 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme2n3=()' 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme2n3 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsze]="0x100000"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nsze]=0x100000 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[ncap]="0x100000"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[ncap]=0x100000 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nuse]="0x100000"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nuse]=0x100000 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.717 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:06:15.717 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nsfeat]=0x14 00:06:15.717 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nlbaf]="7"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nlbaf]=7 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[flbas]="0x4"' 00:06:15.978 
08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[flbas]=0x4 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[mc]="0x3"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[mc]=0x3 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[dpc]="0x1f"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[dpc]=0x1f 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[dps]="0"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[dps]=0 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nmic]="0"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nmic]=0 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[rescap]="0"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[rescap]=0 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[fpi]="0"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[fpi]=0 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.978 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.978 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[dlfeat]="1"' 00:06:15.978 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[dlfeat]=1 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawun]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nawun]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawupf]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nawupf]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nacwu]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nacwu]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
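Editor's note: the flbas=0x4 just captured, together with the "lbaf4 : ms:0 lbads:12 rp:0 (in use)" entries recorded for each namespace above, pins down the active LBA format. A back-of-the-envelope check in bash, not part of the test code itself; values are copied from this trace, and the bit layout (low nibble of FLBAS indexes the LBA format, lbads is log2 of the data block size) follows the NVMe base spec:

    # Standalone sanity check using values from the id-ns output above.
    flbas=0x4
    fmt=$(( flbas & 0xf ))          # low nibble -> LBA format index 4
    lbads=12                        # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
    echo "active format: $fmt, block size: $(( 1 << lbads )) bytes"   # 4096

So every namespace in this run is formatted with 4096-byte blocks and no per-block metadata (ms:0).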
00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabsn]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nabsn]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabo]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nabo]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabspf]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nabspf]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[noiob]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[noiob]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmcap]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nvmcap]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwg]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[npwg]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwa]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[npwa]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[npdg]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[npdg]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[npda]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[npda]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nows]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nows]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[mssrl]="128"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[mssrl]=128 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- 
nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[mcl]="128"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[mcl]=128 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[msrc]="127"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[msrc]=127 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nulbaf]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nulbaf]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[anagrpid]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[anagrpid]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsattr]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nsattr]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmsetid]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nvmsetid]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[endgid]="0"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[endgid]=0 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[eui64]=0000000000000000 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read 
-r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:06:15.979 08:19:03 -- nvme/functions.sh@25 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.979 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.979 08:19:03 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:06:15.979 08:19:03 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme2 00:06:15.980 08:19:03 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme2_ns 00:06:15.980 08:19:03 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:12.0 00:06:15.980 08:19:03 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme2 00:06:15.980 08:19:03 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:06:15.980 08:19:03 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@51 -- # pci=0000:00:13.0 00:06:15.980 08:19:03 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:13.0 00:06:15.980 08:19:03 -- scripts/common.sh@18 -- # local i 00:06:15.980 08:19:03 -- 
scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:06:15.980 08:19:03 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:15.980 08:19:03 -- scripts/common.sh@27 -- # return 0 00:06:15.980 08:19:03 -- nvme/functions.sh@53 -- # ctrl_dev=nvme3 00:06:15.980 08:19:03 -- nvme/functions.sh@54 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:06:15.980 08:19:03 -- nvme/functions.sh@19 -- # local ref=nvme3 reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@20 -- # shift 00:06:15.980 08:19:03 -- nvme/functions.sh@22 -- # local -gA 'nvme3=()' 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme3 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[vid]="0x1b36"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[vid]=0x1b36 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ssvid]="0x1af4"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ssvid]=0x1af4 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 12343 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[sn]="12343 "' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[sn]='12343 ' 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fr]="8.0.0 "' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fr]='8.0.0 ' 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rab]="6"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rab]=6 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ieee]="525400"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ieee]=525400 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x2 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[cmic]="0x2"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[cmic]=0x2 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # 
IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mdts]="7"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mdts]=7 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[cntlid]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[cntlid]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ver]="0x10400"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ver]=0x10400 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3r]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rtd3r]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3e]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rtd3e]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[oaes]="0x100"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[oaes]=0x100 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x88010 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ctratt]="0x88010"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ctratt]=0x88010 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rrls]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rrls]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[cntrltype]="1"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[cntrltype]=1 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 
0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[crdt1]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[crdt1]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[crdt2]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[crdt2]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[crdt3]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[crdt3]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[nvmsr]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[nvmsr]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[vwci]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[vwci]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mec]="0"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mec]=0 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[oacs]="0x12a"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[oacs]=0x12a 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[acl]="3"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[acl]=3 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[aerl]="3"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[aerl]=3 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.980 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.980 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[frmw]="0x3"' 00:06:15.980 08:19:03 -- nvme/functions.sh@25 -- # nvme3[frmw]=0x3 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[lpa]="0x7"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[lpa]=0x7 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 
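Editor's note: the @23/@24/@25 triplets that dominate this log are one tight loop in nvme_get: split each line of `nvme id-ctrl`/`id-ns` output on the first ':', skip lines with no value, and eval the pair into a global associative array named after the device. A condensed sketch reconstructed from the trace; the whitespace trimming and exact quoting are assumptions, and the real helper in nvme/functions.sh is authoritative:

    nvme_get() {                                # e.g. nvme_get nvme3 id-ctrl /dev/nvme3
        local ref=$1 reg val                    # @19
        shift                                   # @20
        local -gA "$ref=()"                     # @22: global assoc array, e.g. nvme3=()
        while IFS=: read -r reg val; do         # @23
            [[ -n $val ]] || continue           # @24: header/blank lines have no value
            reg=${reg//[[:space:]]/}            # "vid       " -> "vid"
            eval "${ref}[${reg}]=\"${val# }\""  # @25: nvme3[vid]="0x1b36"
        done < <(nvme "$@")                     # @18
    }

Each register the loop captures is what the later feature checks read back out of nvme3[], which is why the trace is one eval per identify field.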
00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[elpe]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[elpe]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[npss]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[npss]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[avscc]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[avscc]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[apsta]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[apsta]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[wctemp]="343"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[wctemp]=343 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[cctemp]="373"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[cctemp]=373 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mtfa]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mtfa]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[hmpre]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[hmpre]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[hmmin]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[hmmin]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[tnvmcap]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[tnvmcap]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[unvmcap]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[unvmcap]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 
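Editor's note: wctemp=343 and cctemp=373 just above are the warning and critical composite temperature thresholds, which id-ctrl reports in kelvins per the NVMe spec. A trivial conversion, values taken from this trace, subtracting the conventional integer 273:

    wctemp=343; cctemp=373                  # from the id-ctrl output above
    echo "warn:     $(( wctemp - 273 )) C"  # 70 C
    echo "critical: $(( cctemp - 273 )) C"  # 100 C

These are the stock QEMU NVMe controller defaults, so nothing in this run is thermally interesting.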
00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rpmbs]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rpmbs]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[edstt]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[edstt]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[dsto]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[dsto]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fwug]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fwug]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[kas]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[kas]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[hctma]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[hctma]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mntmt]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mntmt]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mxtmt]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mxtmt]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[sanicap]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[sanicap]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[hmminds]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[hmminds]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[hmmaxd]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[hmmaxd]=0 00:06:15.981 
08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[nsetidmax]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[nsetidmax]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[endgidmax]="1"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[endgidmax]=1 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[anatt]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[anatt]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[anacap]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[anacap]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[anagrpmax]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[anagrpmax]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[nanagrpid]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[nanagrpid]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[pels]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[pels]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[domainid]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[domainid]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[megcap]="0"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[megcap]=0 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.981 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[sqes]="0x66"' 00:06:15.981 08:19:03 -- nvme/functions.sh@25 -- # nvme3[sqes]=0x66 00:06:15.981 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 
'nvme3[cqes]="0x44"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[cqes]=0x44 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[maxcmd]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[maxcmd]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[nn]="256"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[nn]=256 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[oncs]="0x15d"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[oncs]=0x15d 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fuses]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fuses]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fna]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fna]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[vwc]="0x7"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[vwc]=0x7 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[awun]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[awun]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[awupf]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[awupf]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[icsvscc]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[icsvscc]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[nwpc]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[nwpc]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
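Editor's note: oncs=0x15d above is the Optional NVM Command Support bitmap. A quick hedged decode; the bit positions (2 = Dataset Management, 3 = Write Zeroes, 8 = Copy) are taken from the NVMe base spec, not from this log:

    oncs=0x15d                              # from the id-ctrl output above
    (( oncs & (1 << 2) )) && echo "DSM (deallocate/TRIM) supported"
    (( oncs & (1 << 3) )) && echo "Write Zeroes supported"
    (( oncs & (1 << 8) )) && echo "Copy supported"

All three tests succeed for 0x15d, which fits a recent QEMU NVMe device model.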
00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[acwu]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[acwu]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ocfs]="0x3"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ocfs]=0x3 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[sgls]="0x1"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[sgls]=0x1 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[mnan]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[mnan]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[maxdna]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[maxdna]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[maxcna]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[maxcna]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[oaqd]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[oaqd]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ioccsz]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ioccsz]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[iorcsz]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[iorcsz]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[icdoff]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[icdoff]=0 00:06:15.982 08:19:03 -- 
nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[fcatt]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[fcatt]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[msdbd]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[msdbd]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ofcs]="0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ofcs]=0 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # eval 'nvme3[active_power_workload]="-"' 00:06:15.982 08:19:03 -- nvme/functions.sh@25 -- # nvme3[active_power_workload]=- 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # IFS=: 00:06:15.982 08:19:03 -- nvme/functions.sh@23 -- # read -r reg val 00:06:15.982 08:19:03 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme3_ns 00:06:15.982 08:19:03 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme3 00:06:15.982 08:19:03 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme3_ns 00:06:15.982 08:19:03 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:13.0 00:06:15.982 08:19:03 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme3 00:06:15.982 08:19:03 -- nvme/functions.sh@67 -- # (( 4 > 0 )) 00:06:15.982 08:19:03 -- nvme/functions.sh@219 -- # local _ctrls ctrl 00:06:15.982 08:19:03 -- nvme/functions.sh@220 -- # local unvmcap tnvmcap cntlid size blksize=512 00:06:15.982 08:19:03 -- nvme/functions.sh@222 -- # _ctrls=($(get_nvme_with_ns_management)) 00:06:15.982 08:19:03 -- nvme/functions.sh@222 -- # get_nvme_with_ns_management 00:06:15.982 08:19:03 -- nvme/functions.sh@157 -- # local _ctrls 00:06:15.982 08:19:03 -- nvme/functions.sh@159 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:06:15.982 08:19:03 -- nvme/functions.sh@159 -- # get_nvmes_with_ns_management 00:06:15.982 08:19:03 -- nvme/functions.sh@146 -- # (( 4 == 0 )) 00:06:15.982 08:19:03 -- nvme/functions.sh@148 -- # local ctrl 00:06:15.982 08:19:03 -- 
nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:06:15.982 08:19:03 -- nvme/functions.sh@150 -- # get_oacs nvme1 nsmgt 00:06:15.982 08:19:03 -- nvme/functions.sh@123 -- # local ctrl=nvme1 bit=nsmgt 00:06:15.982 08:19:03 -- nvme/functions.sh@124 -- # local -A bits 00:06:15.982 08:19:03 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:06:15.982 08:19:03 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:06:15.982 08:19:03 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:06:15.983 08:19:03 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:06:15.983 08:19:03 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:06:15.983 08:19:03 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:06:15.983 08:19:03 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:06:15.983 08:19:03 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:06:15.983 08:19:03 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:06:15.983 08:19:03 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:06:15.983 08:19:03 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:06:15.983 08:19:03 -- nvme/functions.sh@139 -- # bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme1 oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0x12a 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # echo nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # get_oacs nvme0 nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@123 -- # local ctrl=nvme0 bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@124 -- # local -A bits 00:06:15.983 08:19:03 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:06:15.983 08:19:03 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:06:15.983 08:19:03 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:06:15.983 08:19:03 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:06:15.983 08:19:03 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:06:15.983 08:19:03 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:06:15.983 08:19:03 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:06:15.983 08:19:03 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:06:15.983 08:19:03 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:06:15.983 08:19:03 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:06:15.983 08:19:03 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:06:15.983 08:19:03 -- nvme/functions.sh@139 -- # bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme0 oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0x12a 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # echo nvme0 
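[Note] Each `get_oacs <ctrl> nsmgt` call in this loop rebuilds the same bit table and then ANDs the controller's Optional Admin Command Support word against it; namespace management is bit 3 (value 8). The whole check reduces to one bitwise test, sketched here with the oacs value the controllers in this run report:

    #!/usr/bin/env bash
    # Sketch of the OACS capability test traced above.
    declare -A bits=(
        ["ss/sr"]=1 ["fnvme"]=2 ["fc/fi"]=4 ["nsmgt"]=8
        ["self-test"]=16 ["directives"]=32 ["nvme-mi-s/r"]=64
        ["virtmgt"]=128 ["doorbellbuf"]=256 ["getlba"]=512
        ["commfeatlock"]=1024
    )
    oacs=0x12a   # as reported by nvme0..nvme3 in this run
    # 0x12a = 2+8+32+256: fnvme, nsmgt, directives, doorbellbuf set
    if (( oacs & bits[nsmgt] )); then
        echo "namespace management supported"
    fi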
00:06:15.983 08:19:03 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # get_oacs nvme3 nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@123 -- # local ctrl=nvme3 bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@124 -- # local -A bits 00:06:15.983 08:19:03 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:06:15.983 08:19:03 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:06:15.983 08:19:03 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:06:15.983 08:19:03 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:06:15.983 08:19:03 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:06:15.983 08:19:03 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:06:15.983 08:19:03 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:06:15.983 08:19:03 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:06:15.983 08:19:03 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:06:15.983 08:19:03 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:06:15.983 08:19:03 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:06:15.983 08:19:03 -- nvme/functions.sh@139 -- # bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme3 oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme3 reg=oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme3 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme3 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0x12a 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # echo nvme3 00:06:15.983 08:19:03 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- # get_oacs nvme2 nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@123 -- # local ctrl=nvme2 bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@124 -- # local -A bits 00:06:15.983 08:19:03 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:06:15.983 08:19:03 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:06:15.983 08:19:03 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:06:15.983 08:19:03 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:06:15.983 08:19:03 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:06:15.983 08:19:03 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:06:15.983 08:19:03 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:06:15.983 08:19:03 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:06:15.983 08:19:03 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:06:15.983 08:19:03 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:06:15.983 08:19:03 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:06:15.983 08:19:03 -- nvme/functions.sh@139 -- # bit=nsmgt 00:06:15.983 08:19:03 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme2 oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme2 reg=oacs 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme2 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme2 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0x12a 00:06:15.983 08:19:03 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:06:15.983 08:19:03 -- nvme/functions.sh@150 -- 
# echo nvme2 00:06:15.983 08:19:03 -- nvme/functions.sh@153 -- # return 0 00:06:15.983 08:19:03 -- nvme/functions.sh@160 -- # (( 4 > 0 )) 00:06:15.983 08:19:03 -- nvme/functions.sh@161 -- # echo nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@162 -- # return 0 00:06:15.983 08:19:03 -- nvme/functions.sh@224 -- # for ctrl in "${_ctrls[@]}" 00:06:15.983 08:19:03 -- nvme/functions.sh@229 -- # get_nvme_ctrl_feature nvme1 unvmcap 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=unvmcap 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0 00:06:15.983 08:19:03 -- nvme/functions.sh@229 -- # unvmcap=0 00:06:15.983 08:19:03 -- nvme/functions.sh@230 -- # get_nvme_ctrl_feature nvme1 tnvmcap 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=tnvmcap 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0 00:06:15.983 08:19:03 -- nvme/functions.sh@230 -- # tnvmcap=0 00:06:15.983 08:19:03 -- nvme/functions.sh@231 -- # get_nvme_ctrl_feature nvme1 cntlid 00:06:15.983 08:19:03 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=cntlid 00:06:15.983 08:19:03 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:06:15.983 08:19:03 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:06:15.983 08:19:03 -- nvme/functions.sh@78 -- # echo 0 00:06:15.983 08:19:03 -- nvme/functions.sh@231 -- # cntlid=0 00:06:15.983 08:19:03 -- nvme/functions.sh@232 -- # (( unvmcap == 0 )) 00:06:15.983 08:19:03 -- nvme/functions.sh@234 -- # continue 00:06:15.983 08:19:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:15.983 08:19:03 -- common/autotest_common.sh@735 -- # xtrace_disable 00:06:15.983 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.983 08:19:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:15.983 08:19:03 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:15.983 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.983 08:19:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:16.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.489 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.489 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.489 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.748 08:19:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:17.748 08:19:05 -- common/autotest_common.sh@735 -- # xtrace_disable 00:06:17.748 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:17.748 08:19:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:17.748 08:19:05 -- common/autotest_common.sh@1521 -- # local bdfs bdf bdf_id 00:06:17.748 08:19:05 -- common/autotest_common.sh@1523 -- # mapfile -t bdfs 00:06:17.748 08:19:05 -- common/autotest_common.sh@1523 -- # get_nvme_bdfs_by_id 0x0a54 00:06:17.748 08:19:05 -- common/autotest_common.sh@1505 -- # bdfs=() 00:06:17.748 08:19:05 -- common/autotest_common.sh@1505 -- # _bdfs=() 00:06:17.748 08:19:05 -- 
common/autotest_common.sh@1505 -- # local bdfs _bdfs bdf 00:06:17.748 08:19:05 -- common/autotest_common.sh@1506 -- # _bdfs=($(get_nvme_bdfs)) 00:06:17.748 08:19:05 -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:06:17.748 08:19:05 -- common/autotest_common.sh@1486 -- # bdfs=() 00:06:17.748 08:19:05 -- common/autotest_common.sh@1486 -- # local bdfs 00:06:17.748 08:19:05 -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.748 08:19:05 -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.748 08:19:05 -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:06:17.748 08:19:05 -- common/autotest_common.sh@1488 -- # (( 4 == 0 )) 00:06:17.748 08:19:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:17.748 08:19:05 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # device=0x0010 00:06:17.748 08:19:05 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.748 08:19:05 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # device=0x0010 00:06:17.748 08:19:05 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.748 08:19:05 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # device=0x0010 00:06:17.748 08:19:05 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.748 08:19:05 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:17.748 08:19:05 -- common/autotest_common.sh@1509 -- # device=0x0010 00:06:17.748 08:19:05 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.748 08:19:05 -- common/autotest_common.sh@1515 -- # (( 0 > 0 )) 00:06:17.748 08:19:05 -- common/autotest_common.sh@1515 -- # return 0 00:06:17.748 08:19:05 -- common/autotest_common.sh@1524 -- # [[ -z '' ]] 00:06:17.748 08:19:05 -- common/autotest_common.sh@1525 -- # return 0 00:06:17.748 08:19:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:17.748 08:19:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:17.748 08:19:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.748 08:19:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.748 08:19:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:17.748 08:19:05 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:17.748 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:17.748 08:19:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:17.748 08:19:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.748 08:19:05 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:17.748 08:19:05 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:17.748 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.007 ************************************ 00:06:18.007 START TEST 
env 00:06:18.007 ************************************ 00:06:18.007 08:19:05 env -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.007 * Looking for test storage... 00:06:18.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:18.007 08:19:05 env -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:18.007 08:19:05 env -- common/autotest_common.sh@1638 -- # lcov --version 00:06:18.007 08:19:05 env -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:18.007 08:19:05 env -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:18.007 08:19:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.007 08:19:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.007 08:19:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.007 08:19:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.007 08:19:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.007 08:19:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.007 08:19:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.007 08:19:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.007 08:19:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.007 08:19:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.007 08:19:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.007 08:19:05 env -- scripts/common.sh@344 -- # case "$op" in 00:06:18.007 08:19:05 env -- scripts/common.sh@345 -- # : 1 00:06:18.007 08:19:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.007 08:19:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.266 08:19:05 env -- scripts/common.sh@365 -- # decimal 1 00:06:18.266 08:19:05 env -- scripts/common.sh@353 -- # local d=1 00:06:18.266 08:19:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.266 08:19:05 env -- scripts/common.sh@355 -- # echo 1 00:06:18.266 08:19:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.266 08:19:05 env -- scripts/common.sh@366 -- # decimal 2 00:06:18.266 08:19:05 env -- scripts/common.sh@353 -- # local d=2 00:06:18.266 08:19:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.266 08:19:05 env -- scripts/common.sh@355 -- # echo 2 00:06:18.266 08:19:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.266 08:19:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.266 08:19:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.267 08:19:05 env -- scripts/common.sh@368 -- # return 0 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:18.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.267 --rc genhtml_branch_coverage=1 00:06:18.267 --rc genhtml_function_coverage=1 00:06:18.267 --rc genhtml_legend=1 00:06:18.267 --rc geninfo_all_blocks=1 00:06:18.267 --rc geninfo_unexecuted_blocks=1 00:06:18.267 00:06:18.267 ' 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:18.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.267 --rc genhtml_branch_coverage=1 00:06:18.267 --rc genhtml_function_coverage=1 00:06:18.267 --rc genhtml_legend=1 00:06:18.267 --rc geninfo_all_blocks=1 00:06:18.267 --rc geninfo_unexecuted_blocks=1 00:06:18.267 00:06:18.267 ' 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1652 -- # 
export 'LCOV=lcov 00:06:18.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.267 --rc genhtml_branch_coverage=1 00:06:18.267 --rc genhtml_function_coverage=1 00:06:18.267 --rc genhtml_legend=1 00:06:18.267 --rc geninfo_all_blocks=1 00:06:18.267 --rc geninfo_unexecuted_blocks=1 00:06:18.267 00:06:18.267 ' 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:18.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.267 --rc genhtml_branch_coverage=1 00:06:18.267 --rc genhtml_function_coverage=1 00:06:18.267 --rc genhtml_legend=1 00:06:18.267 --rc geninfo_all_blocks=1 00:06:18.267 --rc geninfo_unexecuted_blocks=1 00:06:18.267 00:06:18.267 ' 00:06:18.267 08:19:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:18.267 08:19:05 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:18.267 08:19:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.267 ************************************ 00:06:18.267 START TEST env_memory 00:06:18.267 ************************************ 00:06:18.267 08:19:05 env.env_memory -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.267 00:06:18.267 00:06:18.267 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.267 http://cunit.sourceforge.net/ 00:06:18.267 00:06:18.267 00:06:18.267 Suite: memory 00:06:18.267 Test: alloc and free memory map ...[2024-11-20 08:19:05.672354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:18.267 passed 00:06:18.267 Test: mem map translation ...[2024-11-20 08:19:05.716931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:18.267 [2024-11-20 08:19:05.716993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:18.267 [2024-11-20 08:19:05.717062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:18.267 [2024-11-20 08:19:05.717087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:18.267 passed 00:06:18.267 Test: mem map registration ...[2024-11-20 08:19:05.785043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:18.267 [2024-11-20 08:19:05.785102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:18.267 passed 00:06:18.526 Test: mem map adjacent registrations ...passed 00:06:18.526 00:06:18.526 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.526 suites 1 1 n/a 0 0 00:06:18.526 tests 4 4 4 0 0 00:06:18.526 asserts 152 152 152 0 n/a 00:06:18.526 00:06:18.526 Elapsed time = 0.243 seconds 00:06:18.526 00:06:18.526 real 0m0.298s 00:06:18.526 user 0m0.265s 00:06:18.526 sys 0m0.023s 00:06:18.526 08:19:05 env.env_memory -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:18.526 08:19:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:18.526 
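[Note] The lcov gate traced above (`lt 1.15 2` via cmp_versions in scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares them component by component, padding missing fields with zero. A reduced sketch of the same idea, not the full in-tree implementation:

    #!/usr/bin/env bash
    # Sketch: component-wise version comparison in the spirit of
    # cmp_versions. Succeeds (returns 0) if $1 < $2.
    version_lt() {
        local IFS=.-: v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # lcov is old enough to need the rc opts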
************************************ 00:06:18.526 END TEST env_memory 00:06:18.526 ************************************ 00:06:18.526 08:19:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.526 08:19:05 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:18.526 08:19:05 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:18.526 08:19:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.526 ************************************ 00:06:18.526 START TEST env_vtophys 00:06:18.526 ************************************ 00:06:18.526 08:19:05 env.env_vtophys -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.526 EAL: lib.eal log level changed from notice to debug 00:06:18.526 EAL: Detected lcore 0 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 1 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 2 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 3 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 4 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 5 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 6 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 7 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 8 as core 0 on socket 0 00:06:18.526 EAL: Detected lcore 9 as core 0 on socket 0 00:06:18.526 EAL: Maximum logical cores by configuration: 128 00:06:18.526 EAL: Detected CPU lcores: 10 00:06:18.526 EAL: Detected NUMA nodes: 1 00:06:18.526 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:18.526 EAL: Detected shared linkage of DPDK 00:06:18.527 EAL: No shared files mode enabled, IPC will be disabled 00:06:18.527 EAL: Selected IOVA mode 'PA' 00:06:18.527 EAL: Probing VFIO support... 00:06:18.527 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.527 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:18.527 EAL: Ask a virtual area of 0x2e000 bytes 00:06:18.527 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:18.527 EAL: Setting up physically contiguous memory... 
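[Note] Each of the four "Ask a virtual area of 0x400000000 bytes" requests that EAL makes below is exactly one memseg list: n_segs:8192 segments of hugepage_sz:2097152 bytes. The arithmetic checks out directly:

    $ printf '0x%x\n' $(( 8192 * 2097152 ))   # segments per list * 2 MiB pages
    0x400000000                               # = 16 GiB of VA reserved per list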
00:06:18.527 EAL: Setting maximum number of open files to 524288 00:06:18.527 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:18.527 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:18.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.527 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:18.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.527 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:18.527 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:18.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.527 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:18.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.527 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:18.527 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:18.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.527 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:18.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.527 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:18.527 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:18.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.527 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:18.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.527 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:18.527 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:18.527 EAL: Hugepages will be freed exactly as allocated. 00:06:18.527 EAL: No shared files mode enabled, IPC is disabled 00:06:18.527 EAL: No shared files mode enabled, IPC is disabled 00:06:18.785 EAL: TSC frequency is ~2490000 KHz 00:06:18.785 EAL: Main lcore 0 is ready (tid=7f0ec0d27a40;cpuset=[0]) 00:06:18.785 EAL: Trying to obtain current memory policy. 00:06:18.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.785 EAL: Restoring previous memory policy: 0 00:06:18.785 EAL: request: mp_malloc_sync 00:06:18.785 EAL: No shared files mode enabled, IPC is disabled 00:06:18.785 EAL: Heap on socket 0 was expanded by 2MB 00:06:18.785 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.785 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:18.785 EAL: Mem event callback 'spdk:(nil)' registered 00:06:18.785 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:18.785 00:06:18.785 00:06:18.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.785 http://cunit.sourceforge.net/ 00:06:18.785 00:06:18.785 00:06:18.785 Suite: components_suite 00:06:19.355 Test: vtophys_malloc_test ...passed 00:06:19.355 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
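[Note] The expand/shrink messages that follow come from the registered 'spdk:(nil)' mem event callback as vtophys_spdk_malloc_test allocates and frees. From outside the test, the same activity shows up in the kernel's hugepage counters; this is a generic Linux interface, not something the log itself invokes, and the values shown are examples rather than output from this run:

    $ grep -E 'HugePages_(Total|Free)' /proc/meminfo
    HugePages_Total:    2048      # example values, not from this run
    HugePages_Free:     2020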
00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.355 EAL: Restoring previous memory policy: 4 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was expanded by 4MB 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was shrunk by 4MB 00:06:19.355 EAL: Trying to obtain current memory policy. 00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.355 EAL: Restoring previous memory policy: 4 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was expanded by 6MB 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was shrunk by 6MB 00:06:19.355 EAL: Trying to obtain current memory policy. 00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.355 EAL: Restoring previous memory policy: 4 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was expanded by 10MB 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was shrunk by 10MB 00:06:19.355 EAL: Trying to obtain current memory policy. 00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.355 EAL: Restoring previous memory policy: 4 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was expanded by 18MB 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was shrunk by 18MB 00:06:19.355 EAL: Trying to obtain current memory policy. 00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.355 EAL: Restoring previous memory policy: 4 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was expanded by 34MB 00:06:19.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.355 EAL: request: mp_malloc_sync 00:06:19.355 EAL: No shared files mode enabled, IPC is disabled 00:06:19.355 EAL: Heap on socket 0 was shrunk by 34MB 00:06:19.355 EAL: Trying to obtain current memory policy. 
00:06:19.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.614 EAL: Restoring previous memory policy: 4 00:06:19.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.614 EAL: request: mp_malloc_sync 00:06:19.614 EAL: No shared files mode enabled, IPC is disabled 00:06:19.614 EAL: Heap on socket 0 was expanded by 66MB 00:06:19.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.614 EAL: request: mp_malloc_sync 00:06:19.614 EAL: No shared files mode enabled, IPC is disabled 00:06:19.614 EAL: Heap on socket 0 was shrunk by 66MB 00:06:19.614 EAL: Trying to obtain current memory policy. 00:06:19.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.874 EAL: Restoring previous memory policy: 4 00:06:19.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.874 EAL: request: mp_malloc_sync 00:06:19.874 EAL: No shared files mode enabled, IPC is disabled 00:06:19.874 EAL: Heap on socket 0 was expanded by 130MB 00:06:19.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.874 EAL: request: mp_malloc_sync 00:06:19.874 EAL: No shared files mode enabled, IPC is disabled 00:06:19.874 EAL: Heap on socket 0 was shrunk by 130MB 00:06:20.133 EAL: Trying to obtain current memory policy. 00:06:20.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.133 EAL: Restoring previous memory policy: 4 00:06:20.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.133 EAL: request: mp_malloc_sync 00:06:20.133 EAL: No shared files mode enabled, IPC is disabled 00:06:20.133 EAL: Heap on socket 0 was expanded by 258MB 00:06:20.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.701 EAL: request: mp_malloc_sync 00:06:20.701 EAL: No shared files mode enabled, IPC is disabled 00:06:20.701 EAL: Heap on socket 0 was shrunk by 258MB 00:06:21.269 EAL: Trying to obtain current memory policy. 00:06:21.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.269 EAL: Restoring previous memory policy: 4 00:06:21.269 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.269 EAL: request: mp_malloc_sync 00:06:21.269 EAL: No shared files mode enabled, IPC is disabled 00:06:21.269 EAL: Heap on socket 0 was expanded by 514MB 00:06:22.205 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.205 EAL: request: mp_malloc_sync 00:06:22.205 EAL: No shared files mode enabled, IPC is disabled 00:06:22.205 EAL: Heap on socket 0 was shrunk by 514MB 00:06:23.141 EAL: Trying to obtain current memory policy. 
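[Note] The heap sizes this test walks (4, 6, 10, 18, 34, 66, 130, 258, 514 MB above, with 1026 MB still to come below) follow a 2^k + 2 MB progression; a one-liner reproduces the ladder:

    $ for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB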
00:06:23.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.399 EAL: Restoring previous memory policy: 4 00:06:23.399 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.399 EAL: request: mp_malloc_sync 00:06:23.399 EAL: No shared files mode enabled, IPC is disabled 00:06:23.399 EAL: Heap on socket 0 was expanded by 1026MB 00:06:25.310 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.310 EAL: request: mp_malloc_sync 00:06:25.310 EAL: No shared files mode enabled, IPC is disabled 00:06:25.310 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:27.215 passed 00:06:27.215 00:06:27.215 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.215 suites 1 1 n/a 0 0 00:06:27.215 tests 2 2 2 0 0 00:06:27.215 asserts 4676 4676 4676 0 n/a 00:06:27.215 00:06:27.215 Elapsed time = 8.150 seconds 00:06:27.215 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.215 EAL: request: mp_malloc_sync 00:06:27.215 EAL: No shared files mode enabled, IPC is disabled 00:06:27.215 EAL: Heap on socket 0 was shrunk by 2MB 00:06:27.215 EAL: No shared files mode enabled, IPC is disabled 00:06:27.215 EAL: No shared files mode enabled, IPC is disabled 00:06:27.215 EAL: No shared files mode enabled, IPC is disabled 00:06:27.215 00:06:27.215 real 0m8.490s 00:06:27.215 user 0m7.465s 00:06:27.215 sys 0m0.865s 00:06:27.215 08:19:14 env.env_vtophys -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:27.215 08:19:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:27.215 ************************************ 00:06:27.215 END TEST env_vtophys 00:06:27.215 ************************************ 00:06:27.215 08:19:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:27.215 08:19:14 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:27.215 08:19:14 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:27.215 08:19:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.215 ************************************ 00:06:27.215 START TEST env_pci 00:06:27.215 ************************************ 00:06:27.215 08:19:14 env.env_pci -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:27.215 00:06:27.215 00:06:27.215 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.215 http://cunit.sourceforge.net/ 00:06:27.215 00:06:27.215 00:06:27.215 Suite: pci 00:06:27.215 Test: pci_hook ...[2024-11-20 08:19:14.580092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57110 has claimed it 00:06:27.215 passed 00:06:27.215 00:06:27.215 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.215 suites 1 1 n/a 0 0 00:06:27.215 tests 1 1 1 0 0 00:06:27.215 asserts 25 25 25 0 n/a 00:06:27.215 00:06:27.215 Elapsed time = 0.008 seconds 00:06:27.215 EAL: Cannot find device (10000:00:01.0) 00:06:27.215 EAL: Failed to attach device on primary process 00:06:27.215 00:06:27.215 real 0m0.112s 00:06:27.215 user 0m0.040s 00:06:27.215 sys 0m0.071s 00:06:27.215 08:19:14 env.env_pci -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:27.215 08:19:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:27.215 ************************************ 00:06:27.215 END TEST env_pci 00:06:27.215 ************************************ 00:06:27.216 08:19:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:27.216 08:19:14 env -- env/env.sh@15 -- # uname 00:06:27.216 08:19:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:27.216 08:19:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:27.216 08:19:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:27.216 08:19:14 env -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:06:27.216 08:19:14 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:27.216 08:19:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.216 ************************************ 00:06:27.216 START TEST env_dpdk_post_init 00:06:27.216 ************************************ 00:06:27.216 08:19:14 env.env_dpdk_post_init -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:27.475 EAL: Detected CPU lcores: 10 00:06:27.475 EAL: Detected NUMA nodes: 1 00:06:27.475 EAL: Detected shared linkage of DPDK 00:06:27.475 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:27.475 EAL: Selected IOVA mode 'PA' 00:06:27.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:27.475 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:27.475 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:27.475 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:27.475 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:27.475 Starting DPDK initialization... 00:06:27.475 Starting SPDK post initialization... 00:06:27.475 SPDK NVMe probe 00:06:27.475 Attaching to 0000:00:10.0 00:06:27.475 Attaching to 0000:00:11.0 00:06:27.475 Attaching to 0000:00:12.0 00:06:27.475 Attaching to 0000:00:13.0 00:06:27.475 Attached to 0000:00:10.0 00:06:27.475 Attached to 0000:00:11.0 00:06:27.475 Attached to 0000:00:13.0 00:06:27.475 Attached to 0000:00:12.0 00:06:27.475 Cleaning up... 
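[Note] env_dpdk_post_init attaches to the four controllers at 0000:00:10.0 through 0000:00:13.0; the earlier setup.sh output identifies them as QEMU's emulated NVMe devices (1b36:0010). Outside the harness they can be listed either with lspci or with the same gen_nvme.sh | jq pipeline that get_nvme_bdfs used above:

    $ lspci -D -d 1b36:0010 | awk '{print $1}'
    0000:00:10.0
    0000:00:11.0
    0000:00:12.0
    0000:00:13.0
    $ scripts/gen_nvme.sh | jq -r '.config[].params.traddr'   # from the SPDK repo root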
00:06:27.475 00:06:27.475 real 0m0.305s 00:06:27.475 user 0m0.102s 00:06:27.475 sys 0m0.106s 00:06:27.475 08:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:27.475 ************************************ 00:06:27.475 END TEST env_dpdk_post_init 00:06:27.475 ************************************ 00:06:27.475 08:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.734 08:19:15 env -- env/env.sh@26 -- # uname 00:06:27.734 08:19:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:27.734 08:19:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.734 08:19:15 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:27.734 08:19:15 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:27.734 08:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.734 ************************************ 00:06:27.734 START TEST env_mem_callbacks 00:06:27.734 ************************************ 00:06:27.734 08:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.734 EAL: Detected CPU lcores: 10 00:06:27.734 EAL: Detected NUMA nodes: 1 00:06:27.734 EAL: Detected shared linkage of DPDK 00:06:27.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:27.734 EAL: Selected IOVA mode 'PA' 00:06:27.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:27.734 00:06:27.734 00:06:27.734 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.734 http://cunit.sourceforge.net/ 00:06:27.734 00:06:27.734 00:06:27.734 Suite: memory 00:06:27.734 Test: test ... 00:06:27.734 register 0x200000200000 2097152 00:06:27.734 malloc 3145728 00:06:27.993 register 0x200000400000 4194304 00:06:27.993 buf 0x2000004fffc0 len 3145728 PASSED 00:06:27.993 malloc 64 00:06:27.993 buf 0x2000004ffec0 len 64 PASSED 00:06:27.993 malloc 4194304 00:06:27.993 register 0x200000800000 6291456 00:06:27.993 buf 0x2000009fffc0 len 4194304 PASSED 00:06:27.993 free 0x2000004fffc0 3145728 00:06:27.993 free 0x2000004ffec0 64 00:06:27.993 unregister 0x200000400000 4194304 PASSED 00:06:27.993 free 0x2000009fffc0 4194304 00:06:27.993 unregister 0x200000800000 6291456 PASSED 00:06:27.993 malloc 8388608 00:06:27.993 register 0x200000400000 10485760 00:06:27.993 buf 0x2000005fffc0 len 8388608 PASSED 00:06:27.993 free 0x2000005fffc0 8388608 00:06:27.993 unregister 0x200000400000 10485760 PASSED 00:06:27.993 passed 00:06:27.993 00:06:27.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.993 suites 1 1 n/a 0 0 00:06:27.993 tests 1 1 1 0 0 00:06:27.993 asserts 15 15 15 0 n/a 00:06:27.993 00:06:27.993 Elapsed time = 0.081 seconds 00:06:27.993 00:06:27.993 real 0m0.292s 00:06:27.993 user 0m0.111s 00:06:27.993 sys 0m0.080s 00:06:27.993 08:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:27.993 08:19:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:27.993 ************************************ 00:06:27.993 END TEST env_mem_callbacks 00:06:27.993 ************************************ 00:06:27.993 00:06:27.993 real 0m10.139s 00:06:27.993 user 0m8.230s 00:06:27.993 sys 0m1.527s 00:06:27.993 08:19:15 env -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:27.993 08:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.993 ************************************ 00:06:27.993 END TEST env 00:06:27.993 
************************************ 00:06:27.993 08:19:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:27.993 08:19:15 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:27.993 08:19:15 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:27.993 08:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:27.993 ************************************ 00:06:27.993 START TEST rpc 00:06:27.993 ************************************ 00:06:27.993 08:19:15 rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:28.252 * Looking for test storage... 00:06:28.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.252 08:19:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.252 08:19:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.252 08:19:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.252 08:19:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.252 08:19:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.252 08:19:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.252 08:19:15 rpc -- scripts/common.sh@345 -- # : 1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.252 08:19:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.252 08:19:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.252 08:19:15 rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.252 08:19:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.252 08:19:15 rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.252 08:19:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.252 08:19:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.252 08:19:15 rpc -- scripts/common.sh@368 -- # return 0 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.252 --rc genhtml_branch_coverage=1 00:06:28.252 --rc genhtml_function_coverage=1 00:06:28.252 --rc genhtml_legend=1 00:06:28.252 --rc geninfo_all_blocks=1 00:06:28.252 --rc geninfo_unexecuted_blocks=1 00:06:28.252 00:06:28.252 ' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.252 --rc genhtml_branch_coverage=1 00:06:28.252 --rc genhtml_function_coverage=1 00:06:28.252 --rc genhtml_legend=1 00:06:28.252 --rc geninfo_all_blocks=1 00:06:28.252 --rc geninfo_unexecuted_blocks=1 00:06:28.252 00:06:28.252 ' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.252 --rc genhtml_branch_coverage=1 00:06:28.252 --rc genhtml_function_coverage=1 00:06:28.252 --rc genhtml_legend=1 00:06:28.252 --rc geninfo_all_blocks=1 00:06:28.252 --rc geninfo_unexecuted_blocks=1 00:06:28.252 00:06:28.252 ' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.252 --rc genhtml_branch_coverage=1 00:06:28.252 --rc genhtml_function_coverage=1 00:06:28.252 --rc genhtml_legend=1 00:06:28.252 --rc geninfo_all_blocks=1 00:06:28.252 --rc geninfo_unexecuted_blocks=1 00:06:28.252 00:06:28.252 ' 00:06:28.252 08:19:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57243 00:06:28.252 08:19:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.252 08:19:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57243 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@838 -- # '[' -z 57243 ']' 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:06:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
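[Note] The rpc suite launches spdk_tgt and then blocks in waitforlisten until the target answers on its UNIX-domain RPC socket. A hedged sketch of that handshake, polling with the real rpc.py entry point and the paths used in this run rather than reproducing waitforlisten itself:

    #!/usr/bin/env bash
    # Sketch: start the target and poll until its RPC socket responds.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || { echo "spdk_tgt died" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt ($pid) is listening on /var/tmp/spdk.sock"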
00:06:28.252 08:19:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:06:28.252 08:19:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.511 [2024-11-20 08:19:15.860111] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:28.511 [2024-11-20 08:19:15.860252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57243 ] 00:06:28.511 [2024-11-20 08:19:16.030588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.770 [2024-11-20 08:19:16.148228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:28.770 [2024-11-20 08:19:16.148292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57243' to capture a snapshot of events at runtime. 00:06:28.770 [2024-11-20 08:19:16.148307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.770 [2024-11-20 08:19:16.148321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.770 [2024-11-20 08:19:16.148331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57243 for offline analysis/debug. 00:06:28.770 [2024-11-20 08:19:16.149631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.706 08:19:17 rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:06:29.706 08:19:17 rpc -- common/autotest_common.sh@871 -- # return 0 00:06:29.706 08:19:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.706 08:19:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.706 08:19:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:29.706 08:19:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:29.706 08:19:17 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:29.706 08:19:17 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:29.706 08:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.706 ************************************ 00:06:29.706 START TEST rpc_integrity 00:06:29.706 ************************************ 00:06:29.706 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1132 -- # rpc_integrity 00:06:29.706 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
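[Note] The startup banner above points at the tracepoint shared-memory file for pid 57243. While the target is still running, the snapshot command is exactly what the app prints; afterwards, the shm file can simply be copied off for offline analysis, as the banner also suggests:

    $ spdk_trace -s spdk_tgt -p 57243                  # live, pid from this run
    $ cp /dev/shm/spdk_tgt_trace.pid57243 /tmp/        # keep for offline analysis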
00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:29.707 { 00:06:29.707 "name": "Malloc0", 00:06:29.707 "aliases": [ 00:06:29.707 "c6d65bac-b880-44b0-8e58-ff75dc05b0af" 00:06:29.707 ], 00:06:29.707 "product_name": "Malloc disk", 00:06:29.707 "block_size": 512, 00:06:29.707 "num_blocks": 16384, 00:06:29.707 "uuid": "c6d65bac-b880-44b0-8e58-ff75dc05b0af", 00:06:29.707 "assigned_rate_limits": { 00:06:29.707 "rw_ios_per_sec": 0, 00:06:29.707 "rw_mbytes_per_sec": 0, 00:06:29.707 "r_mbytes_per_sec": 0, 00:06:29.707 "w_mbytes_per_sec": 0 00:06:29.707 }, 00:06:29.707 "claimed": false, 00:06:29.707 "zoned": false, 00:06:29.707 "supported_io_types": { 00:06:29.707 "read": true, 00:06:29.707 "write": true, 00:06:29.707 "unmap": true, 00:06:29.707 "flush": true, 00:06:29.707 "reset": true, 00:06:29.707 "nvme_admin": false, 00:06:29.707 "nvme_io": false, 00:06:29.707 "nvme_io_md": false, 00:06:29.707 "write_zeroes": true, 00:06:29.707 "zcopy": true, 00:06:29.707 "get_zone_info": false, 00:06:29.707 "zone_management": false, 00:06:29.707 "zone_append": false, 00:06:29.707 "compare": false, 00:06:29.707 "compare_and_write": false, 00:06:29.707 "abort": true, 00:06:29.707 "seek_hole": false, 00:06:29.707 "seek_data": false, 00:06:29.707 "copy": true, 00:06:29.707 "nvme_iov_md": false 00:06:29.707 }, 00:06:29.707 "memory_domains": [ 00:06:29.707 { 00:06:29.707 "dma_device_id": "system", 00:06:29.707 "dma_device_type": 1 00:06:29.707 }, 00:06:29.707 { 00:06:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.707 "dma_device_type": 2 00:06:29.707 } 00:06:29.707 ], 00:06:29.707 "driver_specific": {} 00:06:29.707 } 00:06:29.707 ]' 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 [2024-11-20 08:19:17.216126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:29.707 [2024-11-20 08:19:17.216210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.707 [2024-11-20 08:19:17.216244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:29.707 [2024-11-20 08:19:17.216261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.707 [2024-11-20 08:19:17.218916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.707 [2024-11-20 08:19:17.218970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:29.707 
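[Note] The passthru leg of rpc_integrity boils down to three RPCs: create a malloc bdev, layer Passthru0 on top of it, and confirm that bdev_get_bdevs now reports two bdevs. Replayed by hand against a running target with scripts/rpc.py (the same RPCs the rpc_cmd wrapper issues):

    $ scripts/rpc.py bdev_malloc_create 8 512           # 8 MiB, 512-byte blocks
    Malloc0
    $ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    Passthru0
    $ scripts/rpc.py bdev_get_bdevs | jq length         # the test asserts 2 here
    2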
Passthru0 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:29.707 { 00:06:29.707 "name": "Malloc0", 00:06:29.707 "aliases": [ 00:06:29.707 "c6d65bac-b880-44b0-8e58-ff75dc05b0af" 00:06:29.707 ], 00:06:29.707 "product_name": "Malloc disk", 00:06:29.707 "block_size": 512, 00:06:29.707 "num_blocks": 16384, 00:06:29.707 "uuid": "c6d65bac-b880-44b0-8e58-ff75dc05b0af", 00:06:29.707 "assigned_rate_limits": { 00:06:29.707 "rw_ios_per_sec": 0, 00:06:29.707 "rw_mbytes_per_sec": 0, 00:06:29.707 "r_mbytes_per_sec": 0, 00:06:29.707 "w_mbytes_per_sec": 0 00:06:29.707 }, 00:06:29.707 "claimed": true, 00:06:29.707 "claim_type": "exclusive_write", 00:06:29.707 "zoned": false, 00:06:29.707 "supported_io_types": { 00:06:29.707 "read": true, 00:06:29.707 "write": true, 00:06:29.707 "unmap": true, 00:06:29.707 "flush": true, 00:06:29.707 "reset": true, 00:06:29.707 "nvme_admin": false, 00:06:29.707 "nvme_io": false, 00:06:29.707 "nvme_io_md": false, 00:06:29.707 "write_zeroes": true, 00:06:29.707 "zcopy": true, 00:06:29.707 "get_zone_info": false, 00:06:29.707 "zone_management": false, 00:06:29.707 "zone_append": false, 00:06:29.707 "compare": false, 00:06:29.707 "compare_and_write": false, 00:06:29.707 "abort": true, 00:06:29.707 "seek_hole": false, 00:06:29.707 "seek_data": false, 00:06:29.707 "copy": true, 00:06:29.707 "nvme_iov_md": false 00:06:29.707 }, 00:06:29.707 "memory_domains": [ 00:06:29.707 { 00:06:29.707 "dma_device_id": "system", 00:06:29.707 "dma_device_type": 1 00:06:29.707 }, 00:06:29.707 { 00:06:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.707 "dma_device_type": 2 00:06:29.707 } 00:06:29.707 ], 00:06:29.707 "driver_specific": {} 00:06:29.707 }, 00:06:29.707 { 00:06:29.707 "name": "Passthru0", 00:06:29.707 "aliases": [ 00:06:29.707 "b579f61f-2768-5569-afd0-13382b46329e" 00:06:29.707 ], 00:06:29.707 "product_name": "passthru", 00:06:29.707 "block_size": 512, 00:06:29.707 "num_blocks": 16384, 00:06:29.707 "uuid": "b579f61f-2768-5569-afd0-13382b46329e", 00:06:29.707 "assigned_rate_limits": { 00:06:29.707 "rw_ios_per_sec": 0, 00:06:29.707 "rw_mbytes_per_sec": 0, 00:06:29.707 "r_mbytes_per_sec": 0, 00:06:29.707 "w_mbytes_per_sec": 0 00:06:29.707 }, 00:06:29.707 "claimed": false, 00:06:29.707 "zoned": false, 00:06:29.707 "supported_io_types": { 00:06:29.707 "read": true, 00:06:29.707 "write": true, 00:06:29.707 "unmap": true, 00:06:29.707 "flush": true, 00:06:29.707 "reset": true, 00:06:29.707 "nvme_admin": false, 00:06:29.707 "nvme_io": false, 00:06:29.707 "nvme_io_md": false, 00:06:29.707 "write_zeroes": true, 00:06:29.707 "zcopy": true, 00:06:29.707 "get_zone_info": false, 00:06:29.707 "zone_management": false, 00:06:29.707 "zone_append": false, 00:06:29.707 "compare": false, 00:06:29.707 "compare_and_write": false, 00:06:29.707 "abort": true, 00:06:29.707 "seek_hole": false, 00:06:29.707 "seek_data": false, 00:06:29.707 "copy": true, 00:06:29.707 "nvme_iov_md": false 00:06:29.707 }, 00:06:29.707 "memory_domains": [ 00:06:29.707 { 00:06:29.707 "dma_device_id": "system", 00:06:29.707 "dma_device_type": 1 00:06:29.707 }, 
00:06:29.707 { 00:06:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.707 "dma_device_type": 2 00:06:29.707 } 00:06:29.707 ], 00:06:29.707 "driver_specific": { 00:06:29.707 "passthru": { 00:06:29.707 "name": "Passthru0", 00:06:29.707 "base_bdev_name": "Malloc0" 00:06:29.707 } 00:06:29.707 } 00:06:29.707 } 00:06:29.707 ]' 00:06:29.707 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:29.966 08:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:29.966 00:06:29.966 real 0m0.339s 00:06:29.966 user 0m0.168s 00:06:29.966 sys 0m0.067s 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 ************************************ 00:06:29.966 END TEST rpc_integrity 00:06:29.966 ************************************ 00:06:29.966 08:19:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:29.966 08:19:17 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:29.966 08:19:17 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:29.966 08:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 ************************************ 00:06:29.966 START TEST rpc_plugins 00:06:29.966 ************************************ 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1132 -- # rpc_plugins 00:06:29.966 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.966 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:29.966 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.966 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:29.966 08:19:17 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:29.966 { 00:06:29.966 "name": "Malloc1", 00:06:29.966 "aliases": [ 00:06:29.966 "c6a51915-548b-407e-9537-8573de495af0" 00:06:29.966 ], 00:06:29.966 "product_name": "Malloc disk", 00:06:29.966 "block_size": 4096, 00:06:29.966 "num_blocks": 256, 00:06:29.966 "uuid": "c6a51915-548b-407e-9537-8573de495af0", 00:06:29.966 "assigned_rate_limits": { 00:06:29.966 "rw_ios_per_sec": 0, 00:06:29.966 "rw_mbytes_per_sec": 0, 00:06:29.966 "r_mbytes_per_sec": 0, 00:06:29.966 "w_mbytes_per_sec": 0 00:06:29.966 }, 00:06:29.966 "claimed": false, 00:06:29.966 "zoned": false, 00:06:29.966 "supported_io_types": { 00:06:29.966 "read": true, 00:06:29.966 "write": true, 00:06:29.966 "unmap": true, 00:06:29.966 "flush": true, 00:06:29.966 "reset": true, 00:06:29.966 "nvme_admin": false, 00:06:29.966 "nvme_io": false, 00:06:29.966 "nvme_io_md": false, 00:06:29.966 "write_zeroes": true, 00:06:29.966 "zcopy": true, 00:06:29.966 "get_zone_info": false, 00:06:29.966 "zone_management": false, 00:06:29.966 "zone_append": false, 00:06:29.966 "compare": false, 00:06:29.966 "compare_and_write": false, 00:06:29.966 "abort": true, 00:06:29.966 "seek_hole": false, 00:06:29.966 "seek_data": false, 00:06:29.966 "copy": true, 00:06:29.966 "nvme_iov_md": false 00:06:29.966 }, 00:06:29.966 "memory_domains": [ 00:06:29.966 { 00:06:29.966 "dma_device_id": "system", 00:06:29.966 "dma_device_type": 1 00:06:29.966 }, 00:06:29.966 { 00:06:29.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.966 "dma_device_type": 2 00:06:29.966 } 00:06:29.966 ], 00:06:29.966 "driver_specific": {} 00:06:29.966 } 00:06:29.966 ]' 00:06:29.966 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:30.225 08:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:30.225 00:06:30.225 real 0m0.147s 00:06:30.225 user 0m0.085s 00:06:30.225 sys 0m0.023s 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:30.225 08:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.225 ************************************ 00:06:30.225 END TEST rpc_plugins 00:06:30.225 ************************************ 00:06:30.225 08:19:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:30.225 08:19:17 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:30.225 08:19:17 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:30.225 08:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.225 ************************************ 00:06:30.225 START TEST rpc_trace_cmd_test 
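The rpc_plugins calls above never invoke bdev_malloc_create directly: the PYTHONPATH export earlier put test/rpc_plugins on the module path, and rpc.py imports the module named by --plugin, which registers the extra create_malloc/delete_malloc subcommands through SPDK's documented spdk_rpc_plugin_initialize() hook. A sketch of the same calls with the plain client:

export PYTHONPATH=$PYTHONPATH:test/rpc_plugins    # make rpc_plugin.py importable
scripts/rpc.py --plugin rpc_plugin create_malloc  # plugin wraps bdev_malloc_create; prints "Malloc1"
scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1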
00:06:30.225 ************************************ 00:06:30.225 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1132 -- # rpc_trace_cmd_test 00:06:30.225 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:30.226 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57243", 00:06:30.226 "tpoint_group_mask": "0x8", 00:06:30.226 "iscsi_conn": { 00:06:30.226 "mask": "0x2", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "scsi": { 00:06:30.226 "mask": "0x4", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "bdev": { 00:06:30.226 "mask": "0x8", 00:06:30.226 "tpoint_mask": "0xffffffffffffffff" 00:06:30.226 }, 00:06:30.226 "nvmf_rdma": { 00:06:30.226 "mask": "0x10", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "nvmf_tcp": { 00:06:30.226 "mask": "0x20", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "ftl": { 00:06:30.226 "mask": "0x40", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "blobfs": { 00:06:30.226 "mask": "0x80", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "dsa": { 00:06:30.226 "mask": "0x200", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "thread": { 00:06:30.226 "mask": "0x400", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "nvme_pcie": { 00:06:30.226 "mask": "0x800", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "iaa": { 00:06:30.226 "mask": "0x1000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "nvme_tcp": { 00:06:30.226 "mask": "0x2000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "bdev_nvme": { 00:06:30.226 "mask": "0x4000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "sock": { 00:06:30.226 "mask": "0x8000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "blob": { 00:06:30.226 "mask": "0x10000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "bdev_raid": { 00:06:30.226 "mask": "0x20000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 }, 00:06:30.226 "scheduler": { 00:06:30.226 "mask": "0x40000", 00:06:30.226 "tpoint_mask": "0x0" 00:06:30.226 } 00:06:30.226 }' 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:30.226 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:30.485 00:06:30.485 real 0m0.235s 00:06:30.485 
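The jq probes above decode trace_get_info: tpoint_group_mask 0x8 is the bit assigned to the bdev group, and because the target was started with '-e bdev' that group's tpoint_mask is fully set. The same checks, sketched with the plain client:

info=$(scripts/rpc.py trace_get_info)
jq -r '.tpoint_group_mask' <<<"$info"   # "0x8": only the bdev group is enabled
jq -r '.bdev.tpoint_mask'  <<<"$info"   # "0xffffffffffffffff": every bdev tracepoint on
jq -r '.tpoint_shm_path'   <<<"$info"   # the /dev/shm file spdk_trace reads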
user 0m0.180s 00:06:30.485 sys 0m0.045s 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:30.485 ************************************ 00:06:30.485 END TEST rpc_trace_cmd_test 00:06:30.485 ************************************ 00:06:30.485 08:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 08:19:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:30.485 08:19:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:30.485 08:19:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:30.485 08:19:17 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:30.485 08:19:17 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:30.485 08:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 ************************************ 00:06:30.485 START TEST rpc_daemon_integrity 00:06:30.485 ************************************ 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1132 -- # rpc_integrity 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:30.485 08:19:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.485 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:30.744 { 00:06:30.744 "name": "Malloc2", 00:06:30.744 "aliases": [ 00:06:30.744 "b0c4bd45-04b3-4eee-8030-26d796fad07c" 00:06:30.744 ], 00:06:30.744 "product_name": "Malloc disk", 00:06:30.744 "block_size": 512, 00:06:30.744 "num_blocks": 16384, 00:06:30.744 "uuid": "b0c4bd45-04b3-4eee-8030-26d796fad07c", 00:06:30.744 "assigned_rate_limits": { 00:06:30.744 "rw_ios_per_sec": 0, 00:06:30.744 "rw_mbytes_per_sec": 0, 00:06:30.744 "r_mbytes_per_sec": 0, 00:06:30.744 "w_mbytes_per_sec": 0 00:06:30.744 }, 00:06:30.744 "claimed": false, 00:06:30.744 "zoned": false, 00:06:30.744 "supported_io_types": { 00:06:30.744 "read": true, 00:06:30.744 "write": true, 00:06:30.744 "unmap": true, 00:06:30.744 "flush": true, 00:06:30.744 "reset": true, 00:06:30.744 "nvme_admin": false, 00:06:30.744 "nvme_io": false, 00:06:30.744 "nvme_io_md": false, 00:06:30.744 "write_zeroes": true, 00:06:30.744 "zcopy": true, 00:06:30.744 "get_zone_info": 
false, 00:06:30.744 "zone_management": false, 00:06:30.744 "zone_append": false, 00:06:30.744 "compare": false, 00:06:30.744 "compare_and_write": false, 00:06:30.744 "abort": true, 00:06:30.744 "seek_hole": false, 00:06:30.744 "seek_data": false, 00:06:30.744 "copy": true, 00:06:30.744 "nvme_iov_md": false 00:06:30.744 }, 00:06:30.744 "memory_domains": [ 00:06:30.744 { 00:06:30.744 "dma_device_id": "system", 00:06:30.744 "dma_device_type": 1 00:06:30.744 }, 00:06:30.744 { 00:06:30.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.744 "dma_device_type": 2 00:06:30.744 } 00:06:30.744 ], 00:06:30.744 "driver_specific": {} 00:06:30.744 } 00:06:30.744 ]' 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 [2024-11-20 08:19:18.120753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:30.744 [2024-11-20 08:19:18.120840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.744 [2024-11-20 08:19:18.120870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:30.744 [2024-11-20 08:19:18.120885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.744 [2024-11-20 08:19:18.123841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.744 [2024-11-20 08:19:18.123890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:30.744 Passthru0 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.744 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:30.744 { 00:06:30.744 "name": "Malloc2", 00:06:30.744 "aliases": [ 00:06:30.744 "b0c4bd45-04b3-4eee-8030-26d796fad07c" 00:06:30.744 ], 00:06:30.744 "product_name": "Malloc disk", 00:06:30.744 "block_size": 512, 00:06:30.744 "num_blocks": 16384, 00:06:30.744 "uuid": "b0c4bd45-04b3-4eee-8030-26d796fad07c", 00:06:30.744 "assigned_rate_limits": { 00:06:30.744 "rw_ios_per_sec": 0, 00:06:30.744 "rw_mbytes_per_sec": 0, 00:06:30.744 "r_mbytes_per_sec": 0, 00:06:30.744 "w_mbytes_per_sec": 0 00:06:30.744 }, 00:06:30.744 "claimed": true, 00:06:30.744 "claim_type": "exclusive_write", 00:06:30.744 "zoned": false, 00:06:30.744 "supported_io_types": { 00:06:30.744 "read": true, 00:06:30.744 "write": true, 00:06:30.744 "unmap": true, 00:06:30.744 "flush": true, 00:06:30.744 "reset": true, 00:06:30.745 "nvme_admin": false, 00:06:30.745 "nvme_io": false, 00:06:30.745 "nvme_io_md": false, 00:06:30.745 "write_zeroes": true, 00:06:30.745 "zcopy": true, 00:06:30.745 "get_zone_info": false, 00:06:30.745 "zone_management": false, 00:06:30.745 "zone_append": false, 00:06:30.745 "compare": false, 
00:06:30.745 "compare_and_write": false, 00:06:30.745 "abort": true, 00:06:30.745 "seek_hole": false, 00:06:30.745 "seek_data": false, 00:06:30.745 "copy": true, 00:06:30.745 "nvme_iov_md": false 00:06:30.745 }, 00:06:30.745 "memory_domains": [ 00:06:30.745 { 00:06:30.745 "dma_device_id": "system", 00:06:30.745 "dma_device_type": 1 00:06:30.745 }, 00:06:30.745 { 00:06:30.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.745 "dma_device_type": 2 00:06:30.745 } 00:06:30.745 ], 00:06:30.745 "driver_specific": {} 00:06:30.745 }, 00:06:30.745 { 00:06:30.745 "name": "Passthru0", 00:06:30.745 "aliases": [ 00:06:30.745 "76bb5907-767f-5362-ac81-b72078bdc2d1" 00:06:30.745 ], 00:06:30.745 "product_name": "passthru", 00:06:30.745 "block_size": 512, 00:06:30.745 "num_blocks": 16384, 00:06:30.745 "uuid": "76bb5907-767f-5362-ac81-b72078bdc2d1", 00:06:30.745 "assigned_rate_limits": { 00:06:30.745 "rw_ios_per_sec": 0, 00:06:30.745 "rw_mbytes_per_sec": 0, 00:06:30.745 "r_mbytes_per_sec": 0, 00:06:30.745 "w_mbytes_per_sec": 0 00:06:30.745 }, 00:06:30.745 "claimed": false, 00:06:30.745 "zoned": false, 00:06:30.745 "supported_io_types": { 00:06:30.745 "read": true, 00:06:30.745 "write": true, 00:06:30.745 "unmap": true, 00:06:30.745 "flush": true, 00:06:30.745 "reset": true, 00:06:30.745 "nvme_admin": false, 00:06:30.745 "nvme_io": false, 00:06:30.745 "nvme_io_md": false, 00:06:30.745 "write_zeroes": true, 00:06:30.745 "zcopy": true, 00:06:30.745 "get_zone_info": false, 00:06:30.745 "zone_management": false, 00:06:30.745 "zone_append": false, 00:06:30.745 "compare": false, 00:06:30.745 "compare_and_write": false, 00:06:30.745 "abort": true, 00:06:30.745 "seek_hole": false, 00:06:30.745 "seek_data": false, 00:06:30.745 "copy": true, 00:06:30.745 "nvme_iov_md": false 00:06:30.745 }, 00:06:30.745 "memory_domains": [ 00:06:30.745 { 00:06:30.745 "dma_device_id": "system", 00:06:30.745 "dma_device_type": 1 00:06:30.745 }, 00:06:30.745 { 00:06:30.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.745 "dma_device_type": 2 00:06:30.745 } 00:06:30.745 ], 00:06:30.745 "driver_specific": { 00:06:30.745 "passthru": { 00:06:30.745 "name": "Passthru0", 00:06:30.745 "base_bdev_name": "Malloc2" 00:06:30.745 } 00:06:30.745 } 00:06:30.745 } 00:06:30.745 ]' 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:30.745 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:31.005 08:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:31.005 00:06:31.005 real 0m0.330s 00:06:31.005 user 0m0.179s 00:06:31.005 sys 0m0.049s 00:06:31.005 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:31.005 ************************************ 00:06:31.005 END TEST rpc_daemon_integrity 00:06:31.005 ************************************ 00:06:31.005 08:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.005 08:19:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:31.005 08:19:18 rpc -- rpc/rpc.sh@84 -- # killprocess 57243 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@957 -- # '[' -z 57243 ']' 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@961 -- # kill -0 57243 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@962 -- # uname 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57243 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:06:31.005 killing process with pid 57243 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57243' 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@976 -- # kill 57243 00:06:31.005 08:19:18 rpc -- common/autotest_common.sh@981 -- # wait 57243 00:06:33.605 00:06:33.605 real 0m5.481s 00:06:33.605 user 0m5.888s 00:06:33.605 sys 0m0.983s 00:06:33.605 08:19:20 rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:33.605 08:19:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.605 ************************************ 00:06:33.605 END TEST rpc 00:06:33.605 ************************************ 00:06:33.605 08:19:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:33.605 08:19:21 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:33.605 08:19:21 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:33.605 08:19:21 -- common/autotest_common.sh@10 -- # set +x 00:06:33.605 ************************************ 00:06:33.605 START TEST skip_rpc 00:06:33.605 ************************************ 00:06:33.605 08:19:21 skip_rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:33.864 * Looking for test storage... 
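The lcov probing that follows leans on scripts/common.sh: 'lt 1.15 2' splits both version strings on the characters .-: and compares them numerically component by component. A condensed sketch of that logic (simplified: it assumes purely numeric components, which holds for the versions seen here):

lt() {
  local -a a b
  IFS=.-: read -ra a <<<"$1"
  IFS=.-: read -ra b <<<"$2"
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower component decides
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                      # equal versions: not less-than
}
lt 1.15 2 && echo "lcov older than 2: use the legacy option spelling"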
00:06:33.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.864 08:19:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:33.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.864 --rc genhtml_branch_coverage=1 00:06:33.864 --rc genhtml_function_coverage=1 00:06:33.864 --rc genhtml_legend=1 00:06:33.864 --rc geninfo_all_blocks=1 00:06:33.864 --rc geninfo_unexecuted_blocks=1 00:06:33.864 00:06:33.864 ' 00:06:33.864 08:19:21 skip_rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:33.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.864 --rc genhtml_branch_coverage=1 00:06:33.864 --rc genhtml_function_coverage=1 00:06:33.864 --rc genhtml_legend=1 00:06:33.864 --rc geninfo_all_blocks=1 00:06:33.864 --rc geninfo_unexecuted_blocks=1 00:06:33.865 00:06:33.865 ' 00:06:33.865 08:19:21 skip_rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 
00:06:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.865 --rc genhtml_branch_coverage=1 00:06:33.865 --rc genhtml_function_coverage=1 00:06:33.865 --rc genhtml_legend=1 00:06:33.865 --rc geninfo_all_blocks=1 00:06:33.865 --rc geninfo_unexecuted_blocks=1 00:06:33.865 00:06:33.865 ' 00:06:33.865 08:19:21 skip_rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.865 --rc genhtml_branch_coverage=1 00:06:33.865 --rc genhtml_function_coverage=1 00:06:33.865 --rc genhtml_legend=1 00:06:33.865 --rc geninfo_all_blocks=1 00:06:33.865 --rc geninfo_unexecuted_blocks=1 00:06:33.865 00:06:33.865 ' 00:06:33.865 08:19:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.865 08:19:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.865 08:19:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:33.865 08:19:21 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:33.865 08:19:21 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:33.865 08:19:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.865 ************************************ 00:06:33.865 START TEST skip_rpc 00:06:33.865 ************************************ 00:06:33.865 08:19:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1132 -- # test_skip_rpc 00:06:33.865 08:19:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57484 00:06:33.865 08:19:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:33.865 08:19:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.865 08:19:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:34.124 [2024-11-20 08:19:21.466604] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
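The point of this test is the inverse of the earlier ones: with --no-rpc-server the target boots, but there is no listener to poll (hence the fixed sleep 5 instead of waitforlisten), so any RPC must fail and the NOT wrapper asserts exactly that. A sketch of the assertion with plain tools:

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt=$!
sleep 5                                   # no RPC listener will ever appear, so just wait
if scripts/rpc.py spdk_get_version; then
  echo "BUG: RPC answered although --no-rpc-server was given" >&2
fi
kill "$tgt"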
00:06:34.124 [2024-11-20 08:19:21.466730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57484 ] 00:06:34.124 [2024-11-20 08:19:21.652521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.383 [2024-11-20 08:19:21.787334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # local es=0 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@658 -- # rpc_cmd spdk_get_version 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@658 -- # es=1 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57484 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' -z 57484 ']' 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill -0 57484 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # uname 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57484 00:06:39.655 killing process with pid 57484 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57484' 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # kill 57484 00:06:39.655 08:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@981 -- # wait 57484 00:06:41.560 00:06:41.560 real 0m7.647s 00:06:41.560 user 0m7.007s 00:06:41.560 sys 0m0.557s 00:06:41.560 ************************************ 00:06:41.560 END TEST skip_rpc 00:06:41.560 ************************************ 00:06:41.560 08:19:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:41.560 08:19:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:41.560 08:19:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:41.560 08:19:29 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:41.560 08:19:29 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:41.560 08:19:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 ************************************ 00:06:41.560 START TEST skip_rpc_with_json 00:06:41.560 ************************************ 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1132 -- # test_skip_rpc_with_json 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57593 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57593 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # '[' -z 57593 ']' 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@843 -- # local max_retries=100 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@847 -- # xtrace_disable 00:06:41.560 08:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.819 [2024-11-20 08:19:29.176474] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
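skip_rpc_with_json drives the round-trip shown over the next lines: probe for a transport (which fails at first), create a TCP transport, persist everything with save_config, then boot a second target from that file and check its log. Condensed, and assuming the same repo layout, the flow is:

scripts/rpc.py nvmf_get_transports --trtype tcp || true   # fails first: no transport yet
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json         # the JSON dumped below
# A fresh target replays the file and its log must show the transport coming up:
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from JSON"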
00:06:41.819 [2024-11-20 08:19:29.176615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57593 ] 00:06:41.819 [2024-11-20 08:19:29.360591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.077 [2024-11-20 08:19:29.501237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@871 -- # return 0 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.015 [2024-11-20 08:19:30.499849] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:43.015 request: 00:06:43.015 { 00:06:43.015 "trtype": "tcp", 00:06:43.015 "method": "nvmf_get_transports", 00:06:43.015 "req_id": 1 00:06:43.015 } 00:06:43.015 Got JSON-RPC error response 00:06:43.015 response: 00:06:43.015 { 00:06:43.015 "code": -19, 00:06:43.015 "message": "No such device" 00:06:43.015 } 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.015 [2024-11-20 08:19:30.515974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:06:43.015 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.274 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:06:43.274 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.274 { 00:06:43.274 "subsystems": [ 00:06:43.274 { 00:06:43.274 "subsystem": "fsdev", 00:06:43.274 "config": [ 00:06:43.274 { 00:06:43.274 "method": "fsdev_set_opts", 00:06:43.274 "params": { 00:06:43.274 "fsdev_io_pool_size": 65535, 00:06:43.274 "fsdev_io_cache_size": 256 00:06:43.274 } 00:06:43.274 } 00:06:43.274 ] 00:06:43.274 }, 00:06:43.274 { 00:06:43.274 "subsystem": "keyring", 00:06:43.274 "config": [] 00:06:43.274 }, 00:06:43.274 { 00:06:43.274 "subsystem": "iobuf", 00:06:43.274 "config": [ 00:06:43.274 { 00:06:43.274 "method": "iobuf_set_options", 00:06:43.274 "params": { 00:06:43.274 "small_pool_count": 8192, 00:06:43.274 "large_pool_count": 1024, 00:06:43.274 "small_bufsize": 8192, 00:06:43.274 "large_bufsize": 135168, 00:06:43.274 "enable_numa": false 00:06:43.274 } 00:06:43.274 } 00:06:43.274 ] 00:06:43.274 }, 00:06:43.274 { 00:06:43.274 "subsystem": "sock", 00:06:43.274 "config": [ 00:06:43.274 { 
00:06:43.274 "method": "sock_set_default_impl", 00:06:43.274 "params": { 00:06:43.274 "impl_name": "posix" 00:06:43.274 } 00:06:43.274 }, 00:06:43.274 { 00:06:43.274 "method": "sock_impl_set_options", 00:06:43.274 "params": { 00:06:43.274 "impl_name": "ssl", 00:06:43.274 "recv_buf_size": 4096, 00:06:43.274 "send_buf_size": 4096, 00:06:43.274 "enable_recv_pipe": true, 00:06:43.274 "enable_quickack": false, 00:06:43.274 "enable_placement_id": 0, 00:06:43.274 "enable_zerocopy_send_server": true, 00:06:43.274 "enable_zerocopy_send_client": false, 00:06:43.275 "zerocopy_threshold": 0, 00:06:43.275 "tls_version": 0, 00:06:43.275 "enable_ktls": false 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "sock_impl_set_options", 00:06:43.275 "params": { 00:06:43.275 "impl_name": "posix", 00:06:43.275 "recv_buf_size": 2097152, 00:06:43.275 "send_buf_size": 2097152, 00:06:43.275 "enable_recv_pipe": true, 00:06:43.275 "enable_quickack": false, 00:06:43.275 "enable_placement_id": 0, 00:06:43.275 "enable_zerocopy_send_server": true, 00:06:43.275 "enable_zerocopy_send_client": false, 00:06:43.275 "zerocopy_threshold": 0, 00:06:43.275 "tls_version": 0, 00:06:43.275 "enable_ktls": false 00:06:43.275 } 00:06:43.275 } 00:06:43.275 ] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "vmd", 00:06:43.275 "config": [] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "accel", 00:06:43.275 "config": [ 00:06:43.275 { 00:06:43.275 "method": "accel_set_options", 00:06:43.275 "params": { 00:06:43.275 "small_cache_size": 128, 00:06:43.275 "large_cache_size": 16, 00:06:43.275 "task_count": 2048, 00:06:43.275 "sequence_count": 2048, 00:06:43.275 "buf_count": 2048 00:06:43.275 } 00:06:43.275 } 00:06:43.275 ] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "bdev", 00:06:43.275 "config": [ 00:06:43.275 { 00:06:43.275 "method": "bdev_set_options", 00:06:43.275 "params": { 00:06:43.275 "bdev_io_pool_size": 65535, 00:06:43.275 "bdev_io_cache_size": 256, 00:06:43.275 "bdev_auto_examine": true, 00:06:43.275 "iobuf_small_cache_size": 128, 00:06:43.275 "iobuf_large_cache_size": 16 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "bdev_raid_set_options", 00:06:43.275 "params": { 00:06:43.275 "process_window_size_kb": 1024, 00:06:43.275 "process_max_bandwidth_mb_sec": 0 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "bdev_iscsi_set_options", 00:06:43.275 "params": { 00:06:43.275 "timeout_sec": 30 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "bdev_nvme_set_options", 00:06:43.275 "params": { 00:06:43.275 "action_on_timeout": "none", 00:06:43.275 "timeout_us": 0, 00:06:43.275 "timeout_admin_us": 0, 00:06:43.275 "keep_alive_timeout_ms": 10000, 00:06:43.275 "arbitration_burst": 0, 00:06:43.275 "low_priority_weight": 0, 00:06:43.275 "medium_priority_weight": 0, 00:06:43.275 "high_priority_weight": 0, 00:06:43.275 "nvme_adminq_poll_period_us": 10000, 00:06:43.275 "nvme_ioq_poll_period_us": 0, 00:06:43.275 "io_queue_requests": 0, 00:06:43.275 "delay_cmd_submit": true, 00:06:43.275 "transport_retry_count": 4, 00:06:43.275 "bdev_retry_count": 3, 00:06:43.275 "transport_ack_timeout": 0, 00:06:43.275 "ctrlr_loss_timeout_sec": 0, 00:06:43.275 "reconnect_delay_sec": 0, 00:06:43.275 "fast_io_fail_timeout_sec": 0, 00:06:43.275 "disable_auto_failback": false, 00:06:43.275 "generate_uuids": false, 00:06:43.275 "transport_tos": 0, 00:06:43.275 "nvme_error_stat": false, 00:06:43.275 "rdma_srq_size": 0, 00:06:43.275 "io_path_stat": false, 
00:06:43.275 "allow_accel_sequence": false, 00:06:43.275 "rdma_max_cq_size": 0, 00:06:43.275 "rdma_cm_event_timeout_ms": 0, 00:06:43.275 "dhchap_digests": [ 00:06:43.275 "sha256", 00:06:43.275 "sha384", 00:06:43.275 "sha512" 00:06:43.275 ], 00:06:43.275 "dhchap_dhgroups": [ 00:06:43.275 "null", 00:06:43.275 "ffdhe2048", 00:06:43.275 "ffdhe3072", 00:06:43.275 "ffdhe4096", 00:06:43.275 "ffdhe6144", 00:06:43.275 "ffdhe8192" 00:06:43.275 ] 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "bdev_nvme_set_hotplug", 00:06:43.275 "params": { 00:06:43.275 "period_us": 100000, 00:06:43.275 "enable": false 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "bdev_wait_for_examine" 00:06:43.275 } 00:06:43.275 ] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "scsi", 00:06:43.275 "config": null 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "scheduler", 00:06:43.275 "config": [ 00:06:43.275 { 00:06:43.275 "method": "framework_set_scheduler", 00:06:43.275 "params": { 00:06:43.275 "name": "static" 00:06:43.275 } 00:06:43.275 } 00:06:43.275 ] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "vhost_scsi", 00:06:43.275 "config": [] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "vhost_blk", 00:06:43.275 "config": [] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "ublk", 00:06:43.275 "config": [] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "nbd", 00:06:43.275 "config": [] 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "subsystem": "nvmf", 00:06:43.275 "config": [ 00:06:43.275 { 00:06:43.275 "method": "nvmf_set_config", 00:06:43.275 "params": { 00:06:43.275 "discovery_filter": "match_any", 00:06:43.275 "admin_cmd_passthru": { 00:06:43.275 "identify_ctrlr": false 00:06:43.275 }, 00:06:43.275 "dhchap_digests": [ 00:06:43.275 "sha256", 00:06:43.275 "sha384", 00:06:43.275 "sha512" 00:06:43.275 ], 00:06:43.275 "dhchap_dhgroups": [ 00:06:43.275 "null", 00:06:43.275 "ffdhe2048", 00:06:43.275 "ffdhe3072", 00:06:43.275 "ffdhe4096", 00:06:43.275 "ffdhe6144", 00:06:43.275 "ffdhe8192" 00:06:43.275 ] 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "nvmf_set_max_subsystems", 00:06:43.275 "params": { 00:06:43.275 "max_subsystems": 1024 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "nvmf_set_crdt", 00:06:43.275 "params": { 00:06:43.275 "crdt1": 0, 00:06:43.275 "crdt2": 0, 00:06:43.275 "crdt3": 0 00:06:43.275 } 00:06:43.275 }, 00:06:43.275 { 00:06:43.275 "method": "nvmf_create_transport", 00:06:43.275 "params": { 00:06:43.275 "trtype": "TCP", 00:06:43.275 "max_queue_depth": 128, 00:06:43.275 "max_io_qpairs_per_ctrlr": 127, 00:06:43.275 "in_capsule_data_size": 4096, 00:06:43.275 "max_io_size": 131072, 00:06:43.276 "io_unit_size": 131072, 00:06:43.276 "max_aq_depth": 128, 00:06:43.276 "num_shared_buffers": 511, 00:06:43.276 "buf_cache_size": 4294967295, 00:06:43.276 "dif_insert_or_strip": false, 00:06:43.276 "zcopy": false, 00:06:43.276 "c2h_success": true, 00:06:43.276 "sock_priority": 0, 00:06:43.276 "abort_timeout_sec": 1, 00:06:43.276 "ack_timeout": 0, 00:06:43.276 "data_wr_pool_size": 0 00:06:43.276 } 00:06:43.276 } 00:06:43.276 ] 00:06:43.276 }, 00:06:43.276 { 00:06:43.276 "subsystem": "iscsi", 00:06:43.276 "config": [ 00:06:43.276 { 00:06:43.276 "method": "iscsi_set_options", 00:06:43.276 "params": { 00:06:43.276 "node_base": "iqn.2016-06.io.spdk", 00:06:43.276 "max_sessions": 128, 00:06:43.276 "max_connections_per_session": 2, 00:06:43.276 "max_queue_depth": 64, 00:06:43.276 
"default_time2wait": 2, 00:06:43.276 "default_time2retain": 20, 00:06:43.276 "first_burst_length": 8192, 00:06:43.276 "immediate_data": true, 00:06:43.276 "allow_duplicated_isid": false, 00:06:43.276 "error_recovery_level": 0, 00:06:43.276 "nop_timeout": 60, 00:06:43.276 "nop_in_interval": 30, 00:06:43.276 "disable_chap": false, 00:06:43.276 "require_chap": false, 00:06:43.276 "mutual_chap": false, 00:06:43.276 "chap_group": 0, 00:06:43.276 "max_large_datain_per_connection": 64, 00:06:43.276 "max_r2t_per_connection": 4, 00:06:43.276 "pdu_pool_size": 36864, 00:06:43.276 "immediate_data_pool_size": 16384, 00:06:43.276 "data_out_pool_size": 2048 00:06:43.276 } 00:06:43.276 } 00:06:43.276 ] 00:06:43.276 } 00:06:43.276 ] 00:06:43.276 } 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57593 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' -z 57593 ']' 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill -0 57593 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # uname 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57593 00:06:43.276 killing process with pid 57593 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57593' 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # kill 57593 00:06:43.276 08:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@981 -- # wait 57593 00:06:45.809 08:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57654 00:06:45.809 08:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.809 08:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57654 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' -z 57654 ']' 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill -0 57654 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # uname 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57654 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57654' 00:06:51.081 killing process with pid 57654 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- 
# kill 57654 00:06:51.081 08:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@981 -- # wait 57654 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.614 ************************************ 00:06:53.614 END TEST skip_rpc_with_json 00:06:53.614 ************************************ 00:06:53.614 00:06:53.614 real 0m11.869s 00:06:53.614 user 0m10.975s 00:06:53.614 sys 0m1.215s 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.614 08:19:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.614 08:19:40 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:53.614 08:19:40 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:53.614 08:19:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.614 ************************************ 00:06:53.614 START TEST skip_rpc_with_delay 00:06:53.614 ************************************ 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1132 -- # test_skip_rpc_with_delay 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # local es=0 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:53.614 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.614 08:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:53.614 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.614 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:53.614 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.614 [2024-11-20 08:19:41.115208] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
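The ERROR above is the expected outcome: --wait-for-rpc tells the app to pause startup until framework_start_init arrives over RPC, which is impossible once --no-rpc-server has removed the listener, so the combination is refused before the app starts. Sketched:

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "BUG: invalid flag combination was accepted" >&2   # the test expects failure (es=1)
fi
# The valid pairing keeps the RPC server and releases startup by hand:
#   build/bin/spdk_tgt --wait-for-rpc &
#   scripts/rpc.py framework_start_init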
00:06:53.873 ************************************ 00:06:53.873 END TEST skip_rpc_with_delay 00:06:53.873 ************************************ 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@658 -- # es=1 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:53.873 00:06:53.873 real 0m0.210s 00:06:53.873 user 0m0.090s 00:06:53.873 sys 0m0.117s 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:53.873 08:19:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:53.873 08:19:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:53.873 08:19:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:53.873 08:19:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:53.873 08:19:41 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:53.873 08:19:41 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:53.873 08:19:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.873 ************************************ 00:06:53.873 START TEST exit_on_failed_rpc_init 00:06:53.873 ************************************ 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1132 -- # test_exit_on_failed_rpc_init 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57783 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57783 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # '[' -z 57783 ']' 00:06:53.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@843 -- # local max_retries=100 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@847 -- # xtrace_disable 00:06:53.873 08:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.873 [2024-11-20 08:19:41.403335] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:53.873 [2024-11-20 08:19:41.403501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:06:54.133 [2024-11-20 08:19:41.587914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.392 [2024-11-20 08:19:41.740360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@871 -- # return 0 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # local es=0 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:55.328 08:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.586 [2024-11-20 08:19:42.907647] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:55.587 [2024-11-20 08:19:42.908095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57812 ] 00:06:55.587 [2024-11-20 08:19:43.092304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.845 [2024-11-20 08:19:43.235393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.845 [2024-11-20 08:19:43.235533] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:55.845 [2024-11-20 08:19:43.235553] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:55.845 [2024-11-20 08:19:43.235576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=234 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@667 -- # es=106 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # case "$es" in 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # es=1 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57783 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' -z 57783 ']' 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill -0 57783 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # uname 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57783 00:06:56.127 killing process with pid 57783 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57783' 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # kill 57783 00:06:56.127 08:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@981 -- # wait 57783 00:06:58.663 00:06:58.663 real 0m4.937s 00:06:58.663 user 0m5.091s 00:06:58.663 sys 0m0.863s 00:06:58.663 08:19:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:58.663 08:19:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:58.663 ************************************ 00:06:58.663 END TEST exit_on_failed_rpc_init 00:06:58.663 ************************************ 00:06:58.922 08:19:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:58.922 00:06:58.922 real 0m25.219s 00:06:58.922 user 0m23.388s 00:06:58.922 sys 0m3.095s 00:06:58.922 08:19:46 skip_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:58.922 08:19:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.922 ************************************ 00:06:58.922 END TEST skip_rpc 00:06:58.922 ************************************ 00:06:58.922 08:19:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:58.922 08:19:46 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:58.922 08:19:46 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:58.922 08:19:46 -- common/autotest_common.sh@10 -- # set +x 00:06:58.922 
************************************ 00:06:58.922 START TEST rpc_client 00:06:58.922 ************************************ 00:06:58.922 08:19:46 rpc_client -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:59.180 * Looking for test storage... 00:06:59.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1638 -- # lcov --version 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.180 08:19:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.180 --rc genhtml_branch_coverage=1 00:06:59.180 --rc genhtml_function_coverage=1 00:06:59.180 --rc genhtml_legend=1 00:06:59.180 --rc geninfo_all_blocks=1 00:06:59.180 --rc geninfo_unexecuted_blocks=1 00:06:59.180 00:06:59.180 ' 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.180 --rc genhtml_branch_coverage=1 00:06:59.180 --rc genhtml_function_coverage=1 00:06:59.180 --rc genhtml_legend=1 00:06:59.180 --rc geninfo_all_blocks=1 00:06:59.180 --rc geninfo_unexecuted_blocks=1 00:06:59.180 00:06:59.180 ' 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.180 --rc genhtml_branch_coverage=1 00:06:59.180 --rc genhtml_function_coverage=1 00:06:59.180 --rc genhtml_legend=1 00:06:59.180 --rc geninfo_all_blocks=1 00:06:59.180 --rc geninfo_unexecuted_blocks=1 00:06:59.180 00:06:59.180 ' 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.180 --rc genhtml_branch_coverage=1 00:06:59.180 --rc genhtml_function_coverage=1 00:06:59.180 --rc genhtml_legend=1 00:06:59.180 --rc geninfo_all_blocks=1 00:06:59.180 --rc geninfo_unexecuted_blocks=1 00:06:59.180 00:06:59.180 ' 00:06:59.180 08:19:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:59.180 OK 00:06:59.180 08:19:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:59.180 00:06:59.180 real 0m0.356s 00:06:59.180 user 0m0.183s 00:06:59.180 sys 0m0.184s 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:59.180 08:19:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:59.180 ************************************ 00:06:59.180 END TEST rpc_client 00:06:59.180 ************************************ 00:06:59.439 08:19:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:59.439 08:19:46 -- 
common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:59.439 08:19:46 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:59.439 08:19:46 -- common/autotest_common.sh@10 -- # set +x 00:06:59.439 ************************************ 00:06:59.439 START TEST json_config 00:06:59.440 ************************************ 00:06:59.440 08:19:46 json_config -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:59.440 08:19:46 json_config -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:59.440 08:19:46 json_config -- common/autotest_common.sh@1638 -- # lcov --version 00:06:59.440 08:19:46 json_config -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:59.440 08:19:46 json_config -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.440 08:19:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.440 08:19:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.440 08:19:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.440 08:19:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.440 08:19:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.440 08:19:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:59.440 08:19:46 json_config -- scripts/common.sh@345 -- # : 1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.440 08:19:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.440 08:19:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@353 -- # local d=1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.440 08:19:46 json_config -- scripts/common.sh@355 -- # echo 1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.440 08:19:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@353 -- # local d=2 00:06:59.440 08:19:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.440 08:19:46 json_config -- scripts/common.sh@355 -- # echo 2 00:06:59.699 08:19:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.699 08:19:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.699 08:19:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.699 08:19:46 json_config -- scripts/common.sh@368 -- # return 0 00:06:59.699 08:19:46 json_config -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.699 08:19:46 json_config -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.699 --rc genhtml_branch_coverage=1 00:06:59.699 --rc genhtml_function_coverage=1 00:06:59.699 --rc genhtml_legend=1 00:06:59.699 --rc geninfo_all_blocks=1 00:06:59.699 --rc geninfo_unexecuted_blocks=1 00:06:59.699 00:06:59.699 ' 00:06:59.699 08:19:46 json_config -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.699 --rc genhtml_branch_coverage=1 00:06:59.699 --rc genhtml_function_coverage=1 00:06:59.699 --rc genhtml_legend=1 00:06:59.699 --rc geninfo_all_blocks=1 00:06:59.699 --rc geninfo_unexecuted_blocks=1 00:06:59.699 00:06:59.699 ' 00:06:59.699 08:19:46 json_config -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.699 --rc genhtml_branch_coverage=1 00:06:59.699 --rc genhtml_function_coverage=1 00:06:59.699 --rc genhtml_legend=1 00:06:59.699 --rc geninfo_all_blocks=1 00:06:59.699 --rc geninfo_unexecuted_blocks=1 00:06:59.699 00:06:59.699 ' 00:06:59.699 08:19:46 json_config -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.699 --rc genhtml_branch_coverage=1 00:06:59.699 --rc genhtml_function_coverage=1 00:06:59.699 --rc genhtml_legend=1 00:06:59.699 --rc geninfo_all_blocks=1 00:06:59.699 --rc geninfo_unexecuted_blocks=1 00:06:59.699 00:06:59.699 ' 00:06:59.699 08:19:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.699 08:19:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.699 08:19:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.699 08:19:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.699 08:19:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.699 08:19:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.699 08:19:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.699 08:19:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.699 08:19:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.699 08:19:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.699 08:19:47 json_config -- paths/export.sh@5 -- # export PATH 00:06:59.699 08:19:47 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.699 08:19:47 json_config -- nvmf/common.sh@51 -- # : 0 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.700 08:19:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.700 WARNING: No tests are enabled so not running JSON configuration tests 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:59.700 08:19:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:59.700 ************************************ 00:06:59.700 END TEST json_config 00:06:59.700 ************************************ 00:06:59.700 00:06:59.700 real 0m0.272s 00:06:59.700 user 0m0.145s 00:06:59.700 sys 0m0.129s 00:06:59.700 08:19:47 json_config -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:59.700 08:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.700 08:19:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:59.700 08:19:47 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:59.700 08:19:47 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:59.700 08:19:47 -- common/autotest_common.sh@10 -- # set +x 00:06:59.700 ************************************ 00:06:59.700 START TEST json_config_extra_key 00:06:59.700 ************************************ 00:06:59.700 08:19:47 json_config_extra_key -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:59.700 08:19:47 json_config_extra_key -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:59.700 08:19:47 json_config_extra_key -- 
common/autotest_common.sh@1638 -- # lcov --version 00:06:59.700 08:19:47 json_config_extra_key -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:59.959 08:19:47 json_config_extra_key -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:59.959 08:19:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:59.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.960 --rc genhtml_branch_coverage=1 00:06:59.960 --rc genhtml_function_coverage=1 00:06:59.960 --rc genhtml_legend=1 00:06:59.960 --rc geninfo_all_blocks=1 00:06:59.960 --rc geninfo_unexecuted_blocks=1 00:06:59.960 00:06:59.960 ' 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:59.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.960 --rc genhtml_branch_coverage=1 00:06:59.960 --rc genhtml_function_coverage=1 
00:06:59.960 --rc genhtml_legend=1 00:06:59.960 --rc geninfo_all_blocks=1 00:06:59.960 --rc geninfo_unexecuted_blocks=1 00:06:59.960 00:06:59.960 ' 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:59.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.960 --rc genhtml_branch_coverage=1 00:06:59.960 --rc genhtml_function_coverage=1 00:06:59.960 --rc genhtml_legend=1 00:06:59.960 --rc geninfo_all_blocks=1 00:06:59.960 --rc geninfo_unexecuted_blocks=1 00:06:59.960 00:06:59.960 ' 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:59.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.960 --rc genhtml_branch_coverage=1 00:06:59.960 --rc genhtml_function_coverage=1 00:06:59.960 --rc genhtml_legend=1 00:06:59.960 --rc geninfo_all_blocks=1 00:06:59.960 --rc geninfo_unexecuted_blocks=1 00:06:59.960 00:06:59.960 ' 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=08ffca0f-7a26-42ee-bf92-3b59f3e32fa7 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.960 08:19:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.960 08:19:47 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.960 08:19:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.960 08:19:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.960 08:19:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:59.960 08:19:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.960 08:19:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:59.960 INFO: launching applications... 00:06:59.960 08:19:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58040 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:59.960 Waiting for target to run... 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58040 /var/tmp/spdk_tgt.sock 00:06:59.960 08:19:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@838 -- # '[' -z 58040 ']' 00:06:59.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:59.960 08:19:47 json_config_extra_key -- common/autotest_common.sh@843 -- # local max_retries=100 00:06:59.961 08:19:47 json_config_extra_key -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:59.961 08:19:47 json_config_extra_key -- common/autotest_common.sh@847 -- # xtrace_disable 00:06:59.961 08:19:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:59.961 [2024-11-20 08:19:47.506568] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:59.961 [2024-11-20 08:19:47.506725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:07:00.526 [2024-11-20 08:19:48.063537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.784 [2024-11-20 08:19:48.220165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.718 00:07:01.718 INFO: shutting down applications... 00:07:01.718 08:19:49 json_config_extra_key -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:01.718 08:19:49 json_config_extra_key -- common/autotest_common.sh@871 -- # return 0 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:01.718 08:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:01.718 08:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58040 ]] 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58040 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:01.718 08:19:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.285 08:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.285 08:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.285 08:19:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:02.285 08:19:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.543 08:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.543 08:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.543 08:19:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:02.543 08:19:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:03.120 08:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:03.120 08:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.120 08:19:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:03.120 08:19:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:03.748 08:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:03.748 08:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.748 08:19:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:03.748 08:19:51 json_config_extra_key -- 
json_config/common.sh@45 -- # sleep 0.5 00:07:04.315 08:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.315 08:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.315 08:19:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:04.315 08:19:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.574 08:19:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.574 08:19:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.575 08:19:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58040 00:07:04.575 08:19:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:04.575 08:19:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:04.575 SPDK target shutdown done 00:07:04.575 08:19:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:04.575 08:19:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:04.575 Success 00:07:04.575 08:19:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:04.575 ************************************ 00:07:04.575 END TEST json_config_extra_key 00:07:04.575 ************************************ 00:07:04.575 00:07:04.575 real 0m4.992s 00:07:04.575 user 0m4.525s 00:07:04.575 sys 0m0.843s 00:07:04.575 08:19:52 json_config_extra_key -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:04.575 08:19:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.833 08:19:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:04.833 08:19:52 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:04.833 08:19:52 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:04.833 08:19:52 -- common/autotest_common.sh@10 -- # set +x 00:07:04.833 ************************************ 00:07:04.833 START TEST alias_rpc 00:07:04.833 ************************************ 00:07:04.833 08:19:52 alias_rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:04.833 * Looking for test storage... 
00:07:04.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:04.834 08:19:52 alias_rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:04.834 08:19:52 alias_rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:07:04.834 08:19:52 alias_rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:05.093 08:19:52 alias_rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.093 08:19:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.094 08:19:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.094 --rc genhtml_branch_coverage=1 00:07:05.094 --rc genhtml_function_coverage=1 00:07:05.094 --rc genhtml_legend=1 00:07:05.094 --rc geninfo_all_blocks=1 00:07:05.094 --rc geninfo_unexecuted_blocks=1 00:07:05.094 00:07:05.094 ' 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.094 --rc genhtml_branch_coverage=1 00:07:05.094 --rc genhtml_function_coverage=1 00:07:05.094 --rc genhtml_legend=1 00:07:05.094 --rc geninfo_all_blocks=1 00:07:05.094 --rc geninfo_unexecuted_blocks=1 00:07:05.094 00:07:05.094 ' 00:07:05.094 08:19:52 alias_rpc -- 
common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.094 --rc genhtml_branch_coverage=1 00:07:05.094 --rc genhtml_function_coverage=1 00:07:05.094 --rc genhtml_legend=1 00:07:05.094 --rc geninfo_all_blocks=1 00:07:05.094 --rc geninfo_unexecuted_blocks=1 00:07:05.094 00:07:05.094 ' 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.094 --rc genhtml_branch_coverage=1 00:07:05.094 --rc genhtml_function_coverage=1 00:07:05.094 --rc genhtml_legend=1 00:07:05.094 --rc geninfo_all_blocks=1 00:07:05.094 --rc geninfo_unexecuted_blocks=1 00:07:05.094 00:07:05.094 ' 00:07:05.094 08:19:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.094 08:19:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58163 00:07:05.094 08:19:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.094 08:19:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58163 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@838 -- # '[' -z 58163 ']' 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:05.094 08:19:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.094 [2024-11-20 08:19:52.573711] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:05.094 [2024-11-20 08:19:52.574037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58163 ] 00:07:05.353 [2024-11-20 08:19:52.759134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.353 [2024-11-20 08:19:52.900585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.729 08:19:53 alias_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:06.729 08:19:53 alias_rpc -- common/autotest_common.sh@871 -- # return 0 00:07:06.729 08:19:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:06.729 08:19:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58163 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@957 -- # '[' -z 58163 ']' 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@961 -- # kill -0 58163 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@962 -- # uname 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58163 00:07:06.729 killing process with pid 58163 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58163' 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@976 -- # kill 58163 00:07:06.729 08:19:54 alias_rpc -- common/autotest_common.sh@981 -- # wait 58163 00:07:09.261 ************************************ 00:07:09.261 END TEST alias_rpc 00:07:09.261 ************************************ 00:07:09.261 00:07:09.261 real 0m4.602s 00:07:09.261 user 0m4.369s 00:07:09.261 sys 0m0.822s 00:07:09.261 08:19:56 alias_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:09.261 08:19:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 08:19:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:09.519 08:19:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:09.519 08:19:56 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:09.519 08:19:56 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:09.519 08:19:56 -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 ************************************ 00:07:09.519 START TEST spdkcli_tcp 00:07:09.519 ************************************ 00:07:09.519 08:19:56 spdkcli_tcp -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:09.519 * Looking for test storage... 
00:07:09.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:09.519 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:09.519 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1638 -- # lcov --version 00:07:09.519 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.778 08:19:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.778 --rc genhtml_branch_coverage=1 00:07:09.778 --rc genhtml_function_coverage=1 00:07:09.778 --rc genhtml_legend=1 00:07:09.778 --rc geninfo_all_blocks=1 00:07:09.778 --rc geninfo_unexecuted_blocks=1 00:07:09.778 00:07:09.778 ' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.778 --rc genhtml_branch_coverage=1 00:07:09.778 --rc genhtml_function_coverage=1 00:07:09.778 --rc genhtml_legend=1 00:07:09.778 --rc geninfo_all_blocks=1 00:07:09.778 --rc geninfo_unexecuted_blocks=1 00:07:09.778 
00:07:09.778 ' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.778 --rc genhtml_branch_coverage=1 00:07:09.778 --rc genhtml_function_coverage=1 00:07:09.778 --rc genhtml_legend=1 00:07:09.778 --rc geninfo_all_blocks=1 00:07:09.778 --rc geninfo_unexecuted_blocks=1 00:07:09.778 00:07:09.778 ' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.778 --rc genhtml_branch_coverage=1 00:07:09.778 --rc genhtml_function_coverage=1 00:07:09.778 --rc genhtml_legend=1 00:07:09.778 --rc geninfo_all_blocks=1 00:07:09.778 --rc geninfo_unexecuted_blocks=1 00:07:09.778 00:07:09.778 ' 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58282 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:09.778 08:19:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58282 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # '[' -z 58282 ']' 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:09.778 08:19:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.778 [2024-11-20 08:19:57.247092] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:09.778 [2024-11-20 08:19:57.247448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:07:10.040 [2024-11-20 08:19:57.435173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.040 [2024-11-20 08:19:57.573790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.040 [2024-11-20 08:19:57.573828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.424 08:19:58 spdkcli_tcp -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:11.424 08:19:58 spdkcli_tcp -- common/autotest_common.sh@871 -- # return 0 00:07:11.424 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58304 00:07:11.424 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:11.424 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:11.424 [ 00:07:11.424 "bdev_malloc_delete", 00:07:11.424 "bdev_malloc_create", 00:07:11.424 "bdev_null_resize", 00:07:11.424 "bdev_null_delete", 00:07:11.424 "bdev_null_create", 00:07:11.424 "bdev_nvme_cuse_unregister", 00:07:11.424 "bdev_nvme_cuse_register", 00:07:11.424 "bdev_opal_new_user", 00:07:11.424 "bdev_opal_set_lock_state", 00:07:11.424 "bdev_opal_delete", 00:07:11.424 "bdev_opal_get_info", 00:07:11.424 "bdev_opal_create", 00:07:11.424 "bdev_nvme_opal_revert", 00:07:11.424 "bdev_nvme_opal_init", 00:07:11.424 "bdev_nvme_send_cmd", 00:07:11.424 "bdev_nvme_set_keys", 00:07:11.424 "bdev_nvme_get_path_iostat", 00:07:11.424 "bdev_nvme_get_mdns_discovery_info", 00:07:11.424 "bdev_nvme_stop_mdns_discovery", 00:07:11.424 "bdev_nvme_start_mdns_discovery", 00:07:11.424 "bdev_nvme_set_multipath_policy", 00:07:11.424 "bdev_nvme_set_preferred_path", 00:07:11.424 "bdev_nvme_get_io_paths", 00:07:11.424 "bdev_nvme_remove_error_injection", 00:07:11.424 "bdev_nvme_add_error_injection", 00:07:11.424 "bdev_nvme_get_discovery_info", 00:07:11.424 "bdev_nvme_stop_discovery", 00:07:11.424 "bdev_nvme_start_discovery", 00:07:11.424 "bdev_nvme_get_controller_health_info", 00:07:11.424 "bdev_nvme_disable_controller", 00:07:11.424 "bdev_nvme_enable_controller", 00:07:11.424 "bdev_nvme_reset_controller", 00:07:11.424 "bdev_nvme_get_transport_statistics", 00:07:11.424 "bdev_nvme_apply_firmware", 00:07:11.424 "bdev_nvme_detach_controller", 00:07:11.424 "bdev_nvme_get_controllers", 00:07:11.424 "bdev_nvme_attach_controller", 00:07:11.424 "bdev_nvme_set_hotplug", 00:07:11.424 "bdev_nvme_set_options", 00:07:11.424 "bdev_passthru_delete", 00:07:11.424 "bdev_passthru_create", 00:07:11.424 "bdev_lvol_set_parent_bdev", 00:07:11.424 "bdev_lvol_set_parent", 00:07:11.424 "bdev_lvol_check_shallow_copy", 00:07:11.424 "bdev_lvol_start_shallow_copy", 00:07:11.424 "bdev_lvol_grow_lvstore", 00:07:11.424 "bdev_lvol_get_lvols", 00:07:11.424 "bdev_lvol_get_lvstores", 00:07:11.424 "bdev_lvol_delete", 00:07:11.424 "bdev_lvol_set_read_only", 00:07:11.424 "bdev_lvol_resize", 00:07:11.424 "bdev_lvol_decouple_parent", 00:07:11.424 "bdev_lvol_inflate", 00:07:11.424 "bdev_lvol_rename", 00:07:11.424 "bdev_lvol_clone_bdev", 00:07:11.424 "bdev_lvol_clone", 00:07:11.424 "bdev_lvol_snapshot", 00:07:11.424 "bdev_lvol_create", 00:07:11.424 "bdev_lvol_delete_lvstore", 00:07:11.424 "bdev_lvol_rename_lvstore", 00:07:11.424 
"bdev_lvol_create_lvstore", 00:07:11.424 "bdev_raid_set_options", 00:07:11.424 "bdev_raid_remove_base_bdev", 00:07:11.424 "bdev_raid_add_base_bdev", 00:07:11.424 "bdev_raid_delete", 00:07:11.424 "bdev_raid_create", 00:07:11.424 "bdev_raid_get_bdevs", 00:07:11.424 "bdev_error_inject_error", 00:07:11.424 "bdev_error_delete", 00:07:11.424 "bdev_error_create", 00:07:11.424 "bdev_split_delete", 00:07:11.424 "bdev_split_create", 00:07:11.424 "bdev_delay_delete", 00:07:11.424 "bdev_delay_create", 00:07:11.424 "bdev_delay_update_latency", 00:07:11.424 "bdev_zone_block_delete", 00:07:11.424 "bdev_zone_block_create", 00:07:11.424 "blobfs_create", 00:07:11.424 "blobfs_detect", 00:07:11.424 "blobfs_set_cache_size", 00:07:11.424 "bdev_xnvme_delete", 00:07:11.424 "bdev_xnvme_create", 00:07:11.424 "bdev_aio_delete", 00:07:11.424 "bdev_aio_rescan", 00:07:11.424 "bdev_aio_create", 00:07:11.424 "bdev_ftl_set_property", 00:07:11.424 "bdev_ftl_get_properties", 00:07:11.424 "bdev_ftl_get_stats", 00:07:11.424 "bdev_ftl_unmap", 00:07:11.424 "bdev_ftl_unload", 00:07:11.424 "bdev_ftl_delete", 00:07:11.424 "bdev_ftl_load", 00:07:11.424 "bdev_ftl_create", 00:07:11.424 "bdev_virtio_attach_controller", 00:07:11.424 "bdev_virtio_scsi_get_devices", 00:07:11.424 "bdev_virtio_detach_controller", 00:07:11.424 "bdev_virtio_blk_set_hotplug", 00:07:11.424 "bdev_iscsi_delete", 00:07:11.424 "bdev_iscsi_create", 00:07:11.424 "bdev_iscsi_set_options", 00:07:11.424 "accel_error_inject_error", 00:07:11.424 "ioat_scan_accel_module", 00:07:11.424 "dsa_scan_accel_module", 00:07:11.424 "iaa_scan_accel_module", 00:07:11.424 "keyring_file_remove_key", 00:07:11.424 "keyring_file_add_key", 00:07:11.424 "keyring_linux_set_options", 00:07:11.424 "fsdev_aio_delete", 00:07:11.424 "fsdev_aio_create", 00:07:11.424 "iscsi_get_histogram", 00:07:11.424 "iscsi_enable_histogram", 00:07:11.424 "iscsi_set_options", 00:07:11.424 "iscsi_get_auth_groups", 00:07:11.424 "iscsi_auth_group_remove_secret", 00:07:11.424 "iscsi_auth_group_add_secret", 00:07:11.424 "iscsi_delete_auth_group", 00:07:11.424 "iscsi_create_auth_group", 00:07:11.424 "iscsi_set_discovery_auth", 00:07:11.424 "iscsi_get_options", 00:07:11.424 "iscsi_target_node_request_logout", 00:07:11.424 "iscsi_target_node_set_redirect", 00:07:11.424 "iscsi_target_node_set_auth", 00:07:11.424 "iscsi_target_node_add_lun", 00:07:11.424 "iscsi_get_stats", 00:07:11.424 "iscsi_get_connections", 00:07:11.424 "iscsi_portal_group_set_auth", 00:07:11.424 "iscsi_start_portal_group", 00:07:11.424 "iscsi_delete_portal_group", 00:07:11.424 "iscsi_create_portal_group", 00:07:11.424 "iscsi_get_portal_groups", 00:07:11.424 "iscsi_delete_target_node", 00:07:11.424 "iscsi_target_node_remove_pg_ig_maps", 00:07:11.424 "iscsi_target_node_add_pg_ig_maps", 00:07:11.424 "iscsi_create_target_node", 00:07:11.424 "iscsi_get_target_nodes", 00:07:11.424 "iscsi_delete_initiator_group", 00:07:11.424 "iscsi_initiator_group_remove_initiators", 00:07:11.424 "iscsi_initiator_group_add_initiators", 00:07:11.425 "iscsi_create_initiator_group", 00:07:11.425 "iscsi_get_initiator_groups", 00:07:11.425 "nvmf_set_crdt", 00:07:11.425 "nvmf_set_config", 00:07:11.425 "nvmf_set_max_subsystems", 00:07:11.425 "nvmf_stop_mdns_prr", 00:07:11.425 "nvmf_publish_mdns_prr", 00:07:11.425 "nvmf_subsystem_get_listeners", 00:07:11.425 "nvmf_subsystem_get_qpairs", 00:07:11.425 "nvmf_subsystem_get_controllers", 00:07:11.425 "nvmf_get_stats", 00:07:11.425 "nvmf_get_transports", 00:07:11.425 "nvmf_create_transport", 00:07:11.425 "nvmf_get_targets", 00:07:11.425 
"nvmf_delete_target", 00:07:11.425 "nvmf_create_target", 00:07:11.425 "nvmf_subsystem_allow_any_host", 00:07:11.425 "nvmf_subsystem_set_keys", 00:07:11.425 "nvmf_subsystem_remove_host", 00:07:11.425 "nvmf_subsystem_add_host", 00:07:11.425 "nvmf_ns_remove_host", 00:07:11.425 "nvmf_ns_add_host", 00:07:11.425 "nvmf_subsystem_remove_ns", 00:07:11.425 "nvmf_subsystem_set_ns_ana_group", 00:07:11.425 "nvmf_subsystem_add_ns", 00:07:11.425 "nvmf_subsystem_listener_set_ana_state", 00:07:11.425 "nvmf_discovery_get_referrals", 00:07:11.425 "nvmf_discovery_remove_referral", 00:07:11.425 "nvmf_discovery_add_referral", 00:07:11.425 "nvmf_subsystem_remove_listener", 00:07:11.425 "nvmf_subsystem_add_listener", 00:07:11.425 "nvmf_delete_subsystem", 00:07:11.425 "nvmf_create_subsystem", 00:07:11.425 "nvmf_get_subsystems", 00:07:11.425 "env_dpdk_get_mem_stats", 00:07:11.425 "nbd_get_disks", 00:07:11.425 "nbd_stop_disk", 00:07:11.425 "nbd_start_disk", 00:07:11.425 "ublk_recover_disk", 00:07:11.425 "ublk_get_disks", 00:07:11.425 "ublk_stop_disk", 00:07:11.425 "ublk_start_disk", 00:07:11.425 "ublk_destroy_target", 00:07:11.425 "ublk_create_target", 00:07:11.425 "virtio_blk_create_transport", 00:07:11.425 "virtio_blk_get_transports", 00:07:11.425 "vhost_controller_set_coalescing", 00:07:11.425 "vhost_get_controllers", 00:07:11.425 "vhost_delete_controller", 00:07:11.425 "vhost_create_blk_controller", 00:07:11.425 "vhost_scsi_controller_remove_target", 00:07:11.425 "vhost_scsi_controller_add_target", 00:07:11.425 "vhost_start_scsi_controller", 00:07:11.425 "vhost_create_scsi_controller", 00:07:11.425 "thread_set_cpumask", 00:07:11.425 "scheduler_set_options", 00:07:11.425 "framework_get_governor", 00:07:11.425 "framework_get_scheduler", 00:07:11.425 "framework_set_scheduler", 00:07:11.425 "framework_get_reactors", 00:07:11.425 "thread_get_io_channels", 00:07:11.425 "thread_get_pollers", 00:07:11.425 "thread_get_stats", 00:07:11.425 "framework_monitor_context_switch", 00:07:11.425 "spdk_kill_instance", 00:07:11.425 "log_enable_timestamps", 00:07:11.425 "log_get_flags", 00:07:11.425 "log_clear_flag", 00:07:11.425 "log_set_flag", 00:07:11.425 "log_get_level", 00:07:11.425 "log_set_level", 00:07:11.425 "log_get_print_level", 00:07:11.425 "log_set_print_level", 00:07:11.425 "framework_enable_cpumask_locks", 00:07:11.425 "framework_disable_cpumask_locks", 00:07:11.425 "framework_wait_init", 00:07:11.425 "framework_start_init", 00:07:11.425 "scsi_get_devices", 00:07:11.425 "bdev_get_histogram", 00:07:11.425 "bdev_enable_histogram", 00:07:11.425 "bdev_set_qos_limit", 00:07:11.425 "bdev_set_qd_sampling_period", 00:07:11.425 "bdev_get_bdevs", 00:07:11.425 "bdev_reset_iostat", 00:07:11.425 "bdev_get_iostat", 00:07:11.425 "bdev_examine", 00:07:11.425 "bdev_wait_for_examine", 00:07:11.425 "bdev_set_options", 00:07:11.425 "accel_get_stats", 00:07:11.425 "accel_set_options", 00:07:11.425 "accel_set_driver", 00:07:11.425 "accel_crypto_key_destroy", 00:07:11.425 "accel_crypto_keys_get", 00:07:11.425 "accel_crypto_key_create", 00:07:11.425 "accel_assign_opc", 00:07:11.425 "accel_get_module_info", 00:07:11.425 "accel_get_opc_assignments", 00:07:11.425 "vmd_rescan", 00:07:11.425 "vmd_remove_device", 00:07:11.425 "vmd_enable", 00:07:11.425 "sock_get_default_impl", 00:07:11.425 "sock_set_default_impl", 00:07:11.425 "sock_impl_set_options", 00:07:11.425 "sock_impl_get_options", 00:07:11.425 "iobuf_get_stats", 00:07:11.425 "iobuf_set_options", 00:07:11.425 "keyring_get_keys", 00:07:11.425 "framework_get_pci_devices", 00:07:11.425 
"framework_get_config", 00:07:11.425 "framework_get_subsystems", 00:07:11.425 "fsdev_set_opts", 00:07:11.425 "fsdev_get_opts", 00:07:11.425 "trace_get_info", 00:07:11.425 "trace_get_tpoint_group_mask", 00:07:11.425 "trace_disable_tpoint_group", 00:07:11.425 "trace_enable_tpoint_group", 00:07:11.425 "trace_clear_tpoint_mask", 00:07:11.425 "trace_set_tpoint_mask", 00:07:11.425 "notify_get_notifications", 00:07:11.425 "notify_get_types", 00:07:11.425 "spdk_get_version", 00:07:11.425 "rpc_get_methods" 00:07:11.425 ] 00:07:11.425 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.425 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:11.425 08:19:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58282 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' -z 58282 ']' 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill -0 58282 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@962 -- # uname 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58282 00:07:11.425 killing process with pid 58282 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58282' 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@976 -- # kill 58282 00:07:11.425 08:19:58 spdkcli_tcp -- common/autotest_common.sh@981 -- # wait 58282 00:07:13.957 ************************************ 00:07:13.957 END TEST spdkcli_tcp 00:07:13.957 ************************************ 00:07:13.957 00:07:13.957 real 0m4.634s 00:07:13.957 user 0m7.997s 00:07:13.957 sys 0m0.870s 00:07:13.957 08:20:01 spdkcli_tcp -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:13.957 08:20:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.217 08:20:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:14.217 08:20:01 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:14.217 08:20:01 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:14.217 08:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:14.217 ************************************ 00:07:14.217 START TEST dpdk_mem_utility 00:07:14.217 ************************************ 00:07:14.217 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:14.217 * Looking for test storage... 
00:07:14.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:14.217 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:14.217 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # lcov --version 00:07:14.217 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:14.477 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.477 08:20:01 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:14.477 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.477 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:14.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.477 --rc genhtml_branch_coverage=1 00:07:14.477 --rc genhtml_function_coverage=1 00:07:14.477 --rc genhtml_legend=1 00:07:14.477 --rc geninfo_all_blocks=1 00:07:14.477 --rc geninfo_unexecuted_blocks=1 00:07:14.477 00:07:14.477 ' 00:07:14.477 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:14.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.477 --rc 
genhtml_branch_coverage=1 00:07:14.477 --rc genhtml_function_coverage=1 00:07:14.477 --rc genhtml_legend=1 00:07:14.477 --rc geninfo_all_blocks=1 00:07:14.477 --rc geninfo_unexecuted_blocks=1 00:07:14.477 00:07:14.477 ' 00:07:14.477 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:14.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.478 --rc genhtml_branch_coverage=1 00:07:14.478 --rc genhtml_function_coverage=1 00:07:14.478 --rc genhtml_legend=1 00:07:14.478 --rc geninfo_all_blocks=1 00:07:14.478 --rc geninfo_unexecuted_blocks=1 00:07:14.478 00:07:14.478 ' 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.478 --rc genhtml_branch_coverage=1 00:07:14.478 --rc genhtml_function_coverage=1 00:07:14.478 --rc genhtml_legend=1 00:07:14.478 --rc geninfo_all_blocks=1 00:07:14.478 --rc geninfo_unexecuted_blocks=1 00:07:14.478 00:07:14.478 ' 00:07:14.478 08:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:14.478 08:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58415 00:07:14.478 08:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:14.478 08:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58415 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@838 -- # '[' -z 58415 ']' 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:14.478 08:20:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:14.478 [2024-11-20 08:20:01.959073] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
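Each suite in this log launches spdk_tgt and then blocks in waitforlisten until the RPC socket is up (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." lines, with max_retries=100). A simplified sketch of that launch-and-wait pattern; the real helper in autotest_common.sh also probes the socket with an RPC call rather than merely checking that it exists:

```bash
#!/usr/bin/env bash
# Simplified launch-and-wait sketch; the binary path, socket path, and
# retry count are taken from the trace above, and the polling loop is a
# stand-in for the waitforlisten() helper.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" &
spdkpid=$!

for (( i = 0; i < 100; i++ )); do   # local max_retries=100, as traced above
    if [[ -S "$RPC_SOCK" ]]; then
        echo "spdk_tgt (pid $spdkpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done
```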
00:07:14.478 [2024-11-20 08:20:01.959401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58415 ] 00:07:14.737 [2024-11-20 08:20:02.148136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.737 [2024-11-20 08:20:02.288113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.116 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:16.116 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@871 -- # return 0 00:07:16.116 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:16.116 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:16.116 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:16.116 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:16.116 { 00:07:16.116 "filename": "/tmp/spdk_mem_dump.txt" 00:07:16.116 } 00:07:16.116 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:16.116 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:16.116 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:16.116 1 heaps totaling size 816.000000 MiB 00:07:16.116 size: 816.000000 MiB heap id: 0 00:07:16.116 end heaps---------- 00:07:16.116 9 mempools totaling size 595.772034 MiB 00:07:16.116 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:16.116 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:16.116 size: 92.545471 MiB name: bdev_io_58415 00:07:16.116 size: 50.003479 MiB name: msgpool_58415 00:07:16.116 size: 36.509338 MiB name: fsdev_io_58415 00:07:16.116 size: 21.763794 MiB name: PDU_Pool 00:07:16.116 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:16.116 size: 4.133484 MiB name: evtpool_58415 00:07:16.116 size: 0.026123 MiB name: Session_Pool 00:07:16.116 end mempools------- 00:07:16.116 6 memzones totaling size 4.142822 MiB 00:07:16.116 size: 1.000366 MiB name: RG_ring_0_58415 00:07:16.116 size: 1.000366 MiB name: RG_ring_1_58415 00:07:16.116 size: 1.000366 MiB name: RG_ring_4_58415 00:07:16.116 size: 1.000366 MiB name: RG_ring_5_58415 00:07:16.116 size: 0.125366 MiB name: RG_ring_2_58415 00:07:16.116 size: 0.015991 MiB name: RG_ring_3_58415 00:07:16.116 end memzones------- 00:07:16.116 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:16.116 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18 00:07:16.116 list of free elements. 
size: 16.789673 MiB 00:07:16.116 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:16.116 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:16.116 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:16.116 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:16.116 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:16.116 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:16.116 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:16.116 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:16.116 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:16.116 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:16.116 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:16.116 element at address: 0x20001ac00000 with size: 0.560242 MiB 00:07:16.116 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:16.116 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:16.116 element at address: 0x200019600000 with size: 0.485413 MiB 00:07:16.116 element at address: 0x200012c00000 with size: 0.443237 MiB 00:07:16.116 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:16.116 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:16.116 list of standard malloc elements. size: 199.289429 MiB 00:07:16.116 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:16.116 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:16.116 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:16.116 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:16.116 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:16.116 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:16.116 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:16.116 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:16.116 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:16.116 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:16.116 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:16.116 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:16.116 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:16.116 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:16.117 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71780 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71880 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71980 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c72080 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012c72180 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:07:16.117 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:16.117 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:07:16.118 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac916c0 with size: 0.000244 MiB 
00:07:16.118 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:07:16.118 element at 
address: 0x20001ac948c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:07:16.118 element at address: 0x200028063f40 with size: 0.000244 MiB 00:07:16.118 element at address: 0x200028064040 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806af80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b080 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b180 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b280 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b380 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b480 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b580 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b680 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b780 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b880 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806b980 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806be80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806c080 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806c180 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806c280 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806c380 with size: 0.000244 MiB 00:07:16.118 element at address: 0x20002806c480 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806c580 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806c680 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806c780 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806c880 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806c980 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d080 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d180 
with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d280 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d380 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d480 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d580 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d680 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d780 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d880 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806d980 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806da80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806db80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806de80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806df80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e080 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e180 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e280 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e380 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e480 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e580 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e680 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e780 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e880 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806e980 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f080 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f180 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f280 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f380 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f480 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f580 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f680 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f780 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f880 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806f980 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:07:16.119 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:07:16.119 list of memzone associated elements. 
size: 599.920898 MiB 00:07:16.119 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:07:16.119 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:16.119 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:07:16.119 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:16.119 element at address: 0x200012df4740 with size: 92.045105 MiB 00:07:16.119 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58415_0 00:07:16.119 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:16.119 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58415_0 00:07:16.119 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:16.119 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58415_0 00:07:16.119 element at address: 0x2000197be900 with size: 20.255615 MiB 00:07:16.119 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:16.119 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:07:16.119 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:16.119 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:16.119 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58415_0 00:07:16.119 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:16.119 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58415 00:07:16.119 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:16.119 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58415 00:07:16.119 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:16.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:16.119 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:07:16.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:16.119 element at address: 0x200018afde00 with size: 1.008179 MiB 00:07:16.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:16.119 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:07:16.119 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:16.119 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:16.119 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58415 00:07:16.119 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:16.119 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58415 00:07:16.119 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:07:16.119 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58415 00:07:16.119 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:07:16.119 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58415 00:07:16.119 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:16.119 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58415 00:07:16.119 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:16.119 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58415 00:07:16.119 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:07:16.119 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:16.119 element at address: 0x200012c72280 with size: 0.500549 MiB 00:07:16.119 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:16.119 element at address: 0x20001967c440 with size: 0.250549 MiB 00:07:16.119 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:16.119 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:16.119 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58415 00:07:16.119 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:16.119 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58415 00:07:16.120 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:16.120 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:16.120 element at address: 0x200028064140 with size: 0.023804 MiB 00:07:16.120 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:16.120 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:16.120 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58415 00:07:16.120 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:07:16.120 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:16.120 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:16.120 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58415 00:07:16.120 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:16.120 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58415 00:07:16.120 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:16.120 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58415 00:07:16.120 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:07:16.120 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:16.120 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:16.120 08:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58415 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' -z 58415 ']' 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill -0 58415 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@962 -- # uname 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58415 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58415' 00:07:16.120 killing process with pid 58415 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@976 -- # kill 58415 00:07:16.120 08:20:03 dpdk_mem_utility -- common/autotest_common.sh@981 -- # wait 58415 00:07:18.652 00:07:18.652 real 0m4.500s 00:07:18.652 user 0m4.183s 00:07:18.652 sys 0m0.814s 00:07:18.652 08:20:06 dpdk_mem_utility -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:18.652 08:20:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 END TEST dpdk_mem_utility 00:07:18.652 ************************************ 00:07:18.652 08:20:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.652 08:20:06 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:18.652 08:20:06 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:18.652 08:20:06 -- common/autotest_common.sh@10 -- # set +x 
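Every suite boundary in this log — the starred START TEST / END TEST banners and the real/user/sys timing triple between them — comes from the run_test wrapper in autotest_common.sh. A rough sketch of what that wrapper does, not the actual helper:

```bash
#!/usr/bin/env bash
# Rough sketch of the run_test wrapper behind the banners in this log:
# print a START banner, time the test command, then print the END banner.
run_test_sketch() {
    local name=$1; shift
    printf '%s\nSTART TEST %s\n%s\n' '**********' "$name" '**********'
    time "$@"    # emits the real/user/sys triple seen between the banners
    printf '%s\nEND TEST %s\n%s\n' '**********' "$name" '**********'
}

run_test_sketch event /home/vagrant/spdk_repo/spdk/test/event/event.sh
```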
00:07:18.652 ************************************ 00:07:18.652 START TEST event 00:07:18.652 ************************************ 00:07:18.652 08:20:06 event -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.910 * Looking for test storage... 00:07:18.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1638 -- # lcov --version 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:18.910 08:20:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.910 08:20:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.910 08:20:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.910 08:20:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.910 08:20:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.910 08:20:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.910 08:20:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.910 08:20:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.910 08:20:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.910 08:20:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.910 08:20:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.910 08:20:06 event -- scripts/common.sh@344 -- # case "$op" in 00:07:18.910 08:20:06 event -- scripts/common.sh@345 -- # : 1 00:07:18.910 08:20:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.910 08:20:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.910 08:20:06 event -- scripts/common.sh@365 -- # decimal 1 00:07:18.910 08:20:06 event -- scripts/common.sh@353 -- # local d=1 00:07:18.910 08:20:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.910 08:20:06 event -- scripts/common.sh@355 -- # echo 1 00:07:18.910 08:20:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.910 08:20:06 event -- scripts/common.sh@366 -- # decimal 2 00:07:18.910 08:20:06 event -- scripts/common.sh@353 -- # local d=2 00:07:18.910 08:20:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.910 08:20:06 event -- scripts/common.sh@355 -- # echo 2 00:07:18.910 08:20:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.910 08:20:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.910 08:20:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.910 08:20:06 event -- scripts/common.sh@368 -- # return 0 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc geninfo_all_blocks=1 00:07:18.910 --rc geninfo_unexecuted_blocks=1 00:07:18.910 00:07:18.910 ' 00:07:18.910 08:20:06 event -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc 
geninfo_all_blocks=1 00:07:18.911 --rc geninfo_unexecuted_blocks=1 00:07:18.911 00:07:18.911 ' 00:07:18.911 08:20:06 event -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.911 --rc genhtml_branch_coverage=1 00:07:18.911 --rc genhtml_function_coverage=1 00:07:18.911 --rc genhtml_legend=1 00:07:18.911 --rc geninfo_all_blocks=1 00:07:18.911 --rc geninfo_unexecuted_blocks=1 00:07:18.911 00:07:18.911 ' 00:07:18.911 08:20:06 event -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.911 --rc genhtml_branch_coverage=1 00:07:18.911 --rc genhtml_function_coverage=1 00:07:18.911 --rc genhtml_legend=1 00:07:18.911 --rc geninfo_all_blocks=1 00:07:18.911 --rc geninfo_unexecuted_blocks=1 00:07:18.911 00:07:18.911 ' 00:07:18.911 08:20:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:18.911 08:20:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:18.911 08:20:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:18.911 08:20:06 event -- common/autotest_common.sh@1108 -- # '[' 6 -le 1 ']' 00:07:18.911 08:20:06 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:18.911 08:20:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.911 ************************************ 00:07:18.911 START TEST event_perf 00:07:18.911 ************************************ 00:07:18.911 08:20:06 event.event_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:19.169 Running I/O for 1 seconds...[2024-11-20 08:20:06.493022] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:19.169 [2024-11-20 08:20:06.493240] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58535 ] 00:07:19.169 [2024-11-20 08:20:06.678720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.427 [2024-11-20 08:20:06.825663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.427 [2024-11-20 08:20:06.825801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.427 [2024-11-20 08:20:06.825895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.427 Running I/O for 1 seconds...[2024-11-20 08:20:06.825929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.804 00:07:20.804 lcore 0: 203898 00:07:20.804 lcore 1: 203899 00:07:20.804 lcore 2: 203897 00:07:20.804 lcore 3: 203896 00:07:20.804 done. 
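
The lt 1.15 2 trace above (it appears again later, before the scheduler suite) is the lcov version gate from scripts/common.sh: both version strings are split on ".", "-" and ":" and compared component by component. A compact sketch of what the trace implies; the decimal() fallback for non-numeric components is an assumption:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0   # assumption: non-numeric parts count as 0
        echo "$d"
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { [[ $op == ">" ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]   # every component matched
    }

    lt() { cmp_versions "$1" "<" "$2"; }

Here lt returns 0 (1.15 is below 2), which apparently selects the pre-2.0 lcov flag spelling (--rc lcov_branch_coverage=1 ...) exported into LCOV_OPTS in the lines above.
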
00:07:20.804 00:07:20.804 real 0m1.648s 00:07:20.804 user 0m4.381s 00:07:20.804 sys 0m0.143s 00:07:20.804 ************************************ 00:07:20.804 END TEST event_perf 00:07:20.804 ************************************ 00:07:20.804 08:20:08 event.event_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:20.804 08:20:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.804 08:20:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.804 08:20:08 event -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:07:20.804 08:20:08 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:20.804 08:20:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.804 ************************************ 00:07:20.804 START TEST event_reactor 00:07:20.804 ************************************ 00:07:20.804 08:20:08 event.event_reactor -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.804 [2024-11-20 08:20:08.211151] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:20.804 [2024-11-20 08:20:08.211266] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:07:21.062 [2024-11-20 08:20:08.389401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.062 [2024-11-20 08:20:08.529318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.439 test_start 00:07:22.439 oneshot 00:07:22.439 tick 100 00:07:22.439 tick 100 00:07:22.439 tick 250 00:07:22.439 tick 100 00:07:22.439 tick 100 00:07:22.439 tick 100 00:07:22.439 tick 250 00:07:22.439 tick 500 00:07:22.439 tick 100 00:07:22.439 tick 100 00:07:22.439 tick 250 00:07:22.439 tick 100 00:07:22.439 tick 100 00:07:22.439 test_end 00:07:22.439 00:07:22.439 real 0m1.618s 00:07:22.439 user 0m1.399s 00:07:22.439 sys 0m0.111s 00:07:22.439 08:20:09 event.event_reactor -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:22.439 08:20:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:22.439 ************************************ 00:07:22.439 END TEST event_reactor 00:07:22.439 ************************************ 00:07:22.439 08:20:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.439 08:20:09 event -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:07:22.439 08:20:09 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:22.439 08:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.439 ************************************ 00:07:22.439 START TEST event_reactor_perf 00:07:22.439 ************************************ 00:07:22.439 08:20:09 event.event_reactor_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.439 [2024-11-20 08:20:09.906798] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
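
Every suite in this log is bracketed by the same START TEST / END TEST banners and real/user/sys timings. They come from the run_test() wrapper in common/autotest_common.sh (the @1108-@1133 lines in the trace). A sketch with the xtrace toggling and error plumbing simplified:

    run_test() {
        [[ $# -le 1 ]] && return 1   # needs a name plus a command, per the @1108 check
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
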
00:07:22.439 [2024-11-20 08:20:09.906938] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58611 ] 00:07:22.698 [2024-11-20 08:20:10.093672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.698 [2024-11-20 08:20:10.225160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.076 test_start 00:07:24.076 test_end 00:07:24.076 Performance: 371238 events per second 00:07:24.076 00:07:24.076 real 0m1.609s 00:07:24.076 user 0m1.382s 00:07:24.076 sys 0m0.119s 00:07:24.076 ************************************ 00:07:24.076 END TEST event_reactor_perf 00:07:24.076 ************************************ 00:07:24.076 08:20:11 event.event_reactor_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:24.076 08:20:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.076 08:20:11 event -- event/event.sh@49 -- # uname -s 00:07:24.076 08:20:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:24.076 08:20:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:24.076 08:20:11 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:24.076 08:20:11 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:24.076 08:20:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.076 ************************************ 00:07:24.076 START TEST event_scheduler 00:07:24.076 ************************************ 00:07:24.076 08:20:11 event.event_scheduler -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:24.335 * Looking for test storage... 
00:07:24.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:24.335 08:20:11 event.event_scheduler -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:24.335 08:20:11 event.event_scheduler -- common/autotest_common.sh@1638 -- # lcov --version 00:07:24.335 08:20:11 event.event_scheduler -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:24.335 08:20:11 event.event_scheduler -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:24.335 08:20:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.335 08:20:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.335 08:20:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.335 08:20:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.336 08:20:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 08:20:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:24.336 08:20:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58693 00:07:24.336 08:20:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:24.336 08:20:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:24.336 08:20:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58693 00:07:24.336 08:20:11 
event.event_scheduler -- common/autotest_common.sh@838 -- # '[' -z 58693 ']' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:24.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:24.336 08:20:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.595 [2024-11-20 08:20:11.926729] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:24.595 [2024-11-20 08:20:11.926883] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58693 ] 00:07:24.595 [2024-11-20 08:20:12.116570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.853 [2024-11-20 08:20:12.265778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.853 [2024-11-20 08:20:12.265876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.853 [2024-11-20 08:20:12.266055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.853 [2024-11-20 08:20:12.266089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@871 -- # return 0 00:07:25.437 08:20:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:25.437 POWER: Cannot set governor of lcore 0 to userspace 00:07:25.437 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:25.437 POWER: Cannot set governor of lcore 0 to performance 00:07:25.437 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:25.437 POWER: Cannot set governor of lcore 0 to userspace 00:07:25.437 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:25.437 POWER: Cannot set governor of lcore 0 to userspace 00:07:25.437 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:25.437 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:25.437 POWER: Unable to set Power Management Environment for lcore 0 00:07:25.437 [2024-11-20 08:20:12.779869] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:25.437 [2024-11-20 08:20:12.779900] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:25.437 [2024-11-20 08:20:12.779914] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:25.437 [2024-11-20 08:20:12.779938] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:25.437 [2024-11-20 08:20:12.779949] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:25.437 [2024-11-20 08:20:12.779962] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.437 08:20:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.437 08:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.695 [2024-11-20 08:20:13.181808] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:25.695 08:20:13 event.event_scheduler -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.695 08:20:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:25.696 08:20:13 event.event_scheduler -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:25.696 08:20:13 event.event_scheduler -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:25.696 08:20:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 ************************************ 00:07:25.696 START TEST scheduler_create_thread 00:07:25.696 ************************************ 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1132 -- # scheduler_create_thread 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 2 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 3 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 4 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 5 00:07:25.696 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 6 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 7 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 8 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 9 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 10 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:25.954 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:25.955 08:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.891 08:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:26.891 08:20:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:26.891 08:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:26.891 08:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.269 08:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:28.269 08:20:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:28.269 08:20:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:28.269 08:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:28.269 08:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 08:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:29.205 00:07:29.205 real 0m3.387s 00:07:29.205 user 0m0.026s 00:07:29.205 sys 0m0.011s 00:07:29.205 08:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:29.205 08:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 ************************************ 00:07:29.205 END TEST scheduler_create_thread 00:07:29.205 ************************************ 00:07:29.205 08:20:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:29.205 08:20:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58693 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' -z 58693 ']' 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@961 -- # kill -0 58693 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@962 -- # uname 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58693 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:07:29.205 killing process with pid 58693 00:07:29.205 08:20:16 event.event_scheduler -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58693' 00:07:29.206 08:20:16 event.event_scheduler -- common/autotest_common.sh@976 -- # kill 58693 00:07:29.206 08:20:16 event.event_scheduler -- 
common/autotest_common.sh@981 -- # wait 58693 00:07:29.464 [2024-11-20 08:20:16.964192] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:30.838 00:07:30.838 real 0m6.775s 00:07:30.838 user 0m13.330s 00:07:30.838 sys 0m0.655s 00:07:30.838 08:20:18 event.event_scheduler -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:30.838 08:20:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.838 ************************************ 00:07:30.838 END TEST event_scheduler 00:07:30.838 ************************************ 00:07:30.838 08:20:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:30.838 08:20:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:30.838 08:20:18 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:30.838 08:20:18 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:30.838 08:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.838 ************************************ 00:07:30.838 START TEST app_repeat 00:07:30.838 ************************************ 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@1132 -- # app_repeat_test 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58810 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58810' 00:07:30.838 Process app_repeat pid: 58810 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.838 spdk_app_start Round 0 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:30.838 08:20:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58810 /var/tmp/spdk-nbd.sock 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 58810 ']' 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:30.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:30.838 08:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.097 [2024-11-20 08:20:18.450030] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:31.097 [2024-11-20 08:20:18.450191] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:07:31.097 [2024-11-20 08:20:18.630688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.356 [2024-11-20 08:20:18.797103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.356 [2024-11-20 08:20:18.797133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.921 08:20:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:31.922 08:20:19 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:07:31.922 08:20:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.180 Malloc0 00:07:32.180 08:20:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.440 Malloc1 00:07:32.698 08:20:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.698 08:20:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.699 08:20:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.699 /dev/nbd0 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:07:32.958 08:20:20 event.app_repeat -- 
common/autotest_common.sh@880 -- # break 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.958 1+0 records in 00:07:32.958 1+0 records out 00:07:32.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444178 s, 9.2 MB/s 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.958 /dev/nbd1 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.958 08:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.958 08:20:20 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.217 1+0 records in 00:07:33.217 1+0 records out 00:07:33.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390567 s, 10.5 MB/s 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:33.217 08:20:20 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.217 
08:20:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.217 { 00:07:33.217 "nbd_device": "/dev/nbd0", 00:07:33.217 "bdev_name": "Malloc0" 00:07:33.217 }, 00:07:33.217 { 00:07:33.217 "nbd_device": "/dev/nbd1", 00:07:33.217 "bdev_name": "Malloc1" 00:07:33.217 } 00:07:33.217 ]' 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.217 { 00:07:33.217 "nbd_device": "/dev/nbd0", 00:07:33.217 "bdev_name": "Malloc0" 00:07:33.217 }, 00:07:33.217 { 00:07:33.217 "nbd_device": "/dev/nbd1", 00:07:33.217 "bdev_name": "Malloc1" 00:07:33.217 } 00:07:33.217 ]' 00:07:33.217 08:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:33.476 /dev/nbd1' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:33.476 /dev/nbd1' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:33.476 256+0 records in 00:07:33.476 256+0 records out 00:07:33.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109971 s, 95.4 MB/s 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.476 256+0 records in 00:07:33.476 256+0 records out 00:07:33.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311422 s, 33.7 MB/s 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.476 256+0 records in 00:07:33.476 256+0 records out 00:07:33.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357585 s, 29.3 MB/s 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.476 08:20:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.476 08:20:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.735 08:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.735 08:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.736 08:20:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.994 08:20:21 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.994 08:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:34.253 08:20:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:34.253 08:20:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.821 08:20:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:36.198 [2024-11-20 08:20:23.406979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.198 [2024-11-20 08:20:23.549114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.198 [2024-11-20 08:20:23.549117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.456 [2024-11-20 08:20:23.780973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:36.456 [2024-11-20 08:20:23.781069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.833 08:20:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:37.833 spdk_app_start Round 1 00:07:37.833 08:20:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:37.833 08:20:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58810 /var/tmp/spdk-nbd.sock 00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 58810 ']' 00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:37.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
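
Each nbd_start_disk above is immediately followed by a waitfornbd poll whose body shows in the xtrace: up to 20 checks of /proc/partitions, then a single 4 KiB direct-I/O read to prove the device actually serves data. A sketch consistent with that trace; the retry pacing and the failure path are assumptions, since only the success path is logged, and the real helper keeps its probe file next to the test (test/event/nbdtest) rather than in /tmp:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: brief back-off before re-checking
        done
        for (( i = 1; i <= 20; i++ )); do
            # read one block with O_DIRECT; a non-empty copy means the device
            # is serving I/O, not merely registered in /proc/partitions
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0
        done
        return 1   # device never became usable
    }
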
00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:37.833 08:20:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:38.092 08:20:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:38.092 08:20:25 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:07:38.092 08:20:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.352 Malloc0 00:07:38.352 08:20:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.612 Malloc1 00:07:38.612 08:20:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.612 08:20:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.871 /dev/nbd0 00:07:38.871 08:20:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.871 08:20:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.871 1+0 records in 00:07:38.871 1+0 records out 
00:07:38.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388017 s, 10.6 MB/s 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:38.871 08:20:26 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:38.871 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.871 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.871 08:20:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.130 /dev/nbd1 00:07:39.130 08:20:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.130 08:20:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:39.130 08:20:26 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.130 1+0 records in 00:07:39.131 1+0 records out 00:07:39.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366827 s, 11.2 MB/s 00:07:39.131 08:20:26 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.131 08:20:26 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:39.131 08:20:26 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.131 08:20:26 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:39.131 08:20:26 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:39.131 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.131 08:20:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.131 08:20:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.131 08:20:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.131 08:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.390 { 00:07:39.390 "nbd_device": "/dev/nbd0", 00:07:39.390 "bdev_name": "Malloc0" 00:07:39.390 }, 00:07:39.390 { 00:07:39.390 "nbd_device": "/dev/nbd1", 00:07:39.390 "bdev_name": "Malloc1" 00:07:39.390 } 
00:07:39.390 ]' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.390 { 00:07:39.390 "nbd_device": "/dev/nbd0", 00:07:39.390 "bdev_name": "Malloc0" 00:07:39.390 }, 00:07:39.390 { 00:07:39.390 "nbd_device": "/dev/nbd1", 00:07:39.390 "bdev_name": "Malloc1" 00:07:39.390 } 00:07:39.390 ]' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.390 /dev/nbd1' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.390 /dev/nbd1' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.390 256+0 records in 00:07:39.390 256+0 records out 00:07:39.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124286 s, 84.4 MB/s 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.390 256+0 records in 00:07:39.390 256+0 records out 00:07:39.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316338 s, 33.1 MB/s 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.390 256+0 records in 00:07:39.390 256+0 records out 00:07:39.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032766 s, 32.0 MB/s 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.390 08:20:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.649 08:20:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.649 08:20:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.649 08:20:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.649 08:20:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.650 08:20:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.909 08:20:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.168 08:20:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.168 08:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.168 08:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:40.168 08:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.168 08:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.426 08:20:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.426 08:20:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:40.683 08:20:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:42.061 [2024-11-20 08:20:29.458809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.061 [2024-11-20 08:20:29.595164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.061 [2024-11-20 08:20:29.595186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.320 [2024-11-20 08:20:29.826107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.320 [2024-11-20 08:20:29.826222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:43.698 spdk_app_start Round 2 00:07:43.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.698 08:20:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:43.698 08:20:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:43.698 08:20:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58810 /var/tmp/spdk-nbd.sock 00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 58810 ']' 00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
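The dd/cmp passes traced above are the whole of nbd_common.sh's nbd_dd_data_verify: write one buffer of random data to every exported NBD device, then byte-compare each device against that buffer. A minimal standalone sketch of the same pattern, assuming two already-connected devices (the mktemp path stands in for the test's nbdrandtest file):

    tmp_file=$(mktemp)                                   # stand-in for .../test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB of reference data

    for dev in /dev/nbd0 /dev/nbd1; do
        # write pass: same reference data to every device, bypassing the page cache
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in /dev/nbd0 /dev/nbd1; do
        # verify pass: cmp exits non-zero at the first mismatching byte
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm "$tmp_file"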
00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:43.698 08:20:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.956 08:20:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:43.956 08:20:31 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:07:43.956 08:20:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.215 Malloc0 00:07:44.215 08:20:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.474 Malloc1 00:07:44.474 08:20:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.474 08:20:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:44.732 /dev/nbd0 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.732 08:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.732 08:20:32 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:07:44.732 08:20:32 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:44.732 08:20:32 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:44.732 08:20:32 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:44.732 08:20:32 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:07:44.733 08:20:32 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:07:44.733 08:20:32 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:44.733 08:20:32 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:44.733 08:20:32 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.733 1+0 records in 00:07:44.733 1+0 records out 
00:07:44.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00298104 s, 1.4 MB/s 00:07:44.990 08:20:32 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.990 08:20:32 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:44.990 08:20:32 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.990 08:20:32 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:44.990 08:20:32 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:44.990 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.990 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.990 08:20:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.249 /dev/nbd1 00:07:45.249 08:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.249 08:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.249 1+0 records in 00:07:45.249 1+0 records out 00:07:45.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493219 s, 8.3 MB/s 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:07:45.249 08:20:32 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:07:45.249 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.249 08:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.249 08:20:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.250 08:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.250 08:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.508 08:20:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.508 { 00:07:45.508 "nbd_device": "/dev/nbd0", 00:07:45.508 "bdev_name": "Malloc0" 00:07:45.508 }, 00:07:45.508 { 00:07:45.508 "nbd_device": "/dev/nbd1", 00:07:45.508 "bdev_name": "Malloc1" 00:07:45.508 } 00:07:45.508 
]' 00:07:45.508 08:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.508 { 00:07:45.508 "nbd_device": "/dev/nbd0", 00:07:45.508 "bdev_name": "Malloc0" 00:07:45.508 }, 00:07:45.508 { 00:07:45.508 "nbd_device": "/dev/nbd1", 00:07:45.508 "bdev_name": "Malloc1" 00:07:45.508 } 00:07:45.508 ]' 00:07:45.508 08:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.767 /dev/nbd1' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.767 /dev/nbd1' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.767 256+0 records in 00:07:45.767 256+0 records out 00:07:45.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847961 s, 124 MB/s 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.767 256+0 records in 00:07:45.767 256+0 records out 00:07:45.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306508 s, 34.2 MB/s 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.767 256+0 records in 00:07:45.767 256+0 records out 00:07:45.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310568 s, 33.8 MB/s 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.767 08:20:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.043 08:20:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.301 08:20:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.559 08:20:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.559 08:20:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.559 08:20:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.817 08:20:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.817 08:20:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.075 08:20:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:48.450 [2024-11-20 08:20:35.859900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.450 [2024-11-20 08:20:35.984649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.450 [2024-11-20 08:20:35.984649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.710 [2024-11-20 08:20:36.209104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:48.710 [2024-11-20 08:20:36.209197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:50.087 08:20:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58810 /var/tmp/spdk-nbd.sock 00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 58810 ']' 00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
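The teardown traced above (nbd_stop_disk followed by waitfornbd_exit) hinges on a bounded poll of /proc/partitions: the device is considered gone once its name no longer matches as a whole word. A sketch of that loop with the same 20-iteration bound seen in the trace; the sleep interval is assumed, and the real helper returns 0 even on timeout:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # whole-word match so nbd1 does not also match nbd10
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1   # assumed pacing
        done
        return 1        # still listed after 20 checks
    }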
00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:50.087 08:20:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:07:50.346 08:20:37 event.app_repeat -- event/event.sh@39 -- # killprocess 58810 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@957 -- # '[' -z 58810 ']' 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@961 -- # kill -0 58810 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@962 -- # uname 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58810 00:07:50.346 killing process with pid 58810 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58810' 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@976 -- # kill 58810 00:07:50.346 08:20:37 event.app_repeat -- common/autotest_common.sh@981 -- # wait 58810 00:07:51.736 spdk_app_start is called in Round 0. 00:07:51.736 Shutdown signal received, stop current app iteration 00:07:51.736 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:07:51.736 spdk_app_start is called in Round 1. 00:07:51.736 Shutdown signal received, stop current app iteration 00:07:51.736 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:07:51.736 spdk_app_start is called in Round 2. 00:07:51.736 Shutdown signal received, stop current app iteration 00:07:51.736 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:07:51.736 spdk_app_start is called in Round 3. 00:07:51.736 Shutdown signal received, stop current app iteration 00:07:51.737 08:20:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:51.737 08:20:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:51.737 00:07:51.737 real 0m20.619s 00:07:51.737 user 0m43.713s 00:07:51.737 sys 0m3.791s 00:07:51.737 ************************************ 00:07:51.737 08:20:39 event.app_repeat -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:51.737 08:20:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:51.737 END TEST app_repeat 00:07:51.737 ************************************ 00:07:51.737 08:20:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:51.737 08:20:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:51.737 08:20:39 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:51.737 08:20:39 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:51.737 08:20:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:51.737 ************************************ 00:07:51.737 START TEST cpu_locks 00:07:51.737 ************************************ 00:07:51.737 08:20:39 event.cpu_locks -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:51.737 * Looking for test storage... 
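killprocess, which just tore down the app_repeat target, follows a standard shape: confirm the PID is still alive, announce it, send SIGTERM, and reap the child. A condensed sketch (the real helper also special-cases sudo-wrapped processes, as the reactor_0/sudo checks above hint):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1   # nothing to kill if it already exited
        echo "killing process with pid $pid"
        kill "$pid"                               # plain SIGTERM, as in the trace
        wait "$pid" || true                       # reap; valid because the target is our child
    }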
00:07:51.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:51.737 08:20:39 event.cpu_locks -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:51.737 08:20:39 event.cpu_locks -- common/autotest_common.sh@1638 -- # lcov --version 00:07:51.737 08:20:39 event.cpu_locks -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.000 08:20:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:52.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.000 --rc genhtml_branch_coverage=1 00:07:52.000 --rc genhtml_function_coverage=1 00:07:52.000 --rc genhtml_legend=1 00:07:52.000 --rc geninfo_all_blocks=1 00:07:52.000 --rc geninfo_unexecuted_blocks=1 00:07:52.000 00:07:52.000 ' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:52.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.000 --rc genhtml_branch_coverage=1 00:07:52.000 --rc genhtml_function_coverage=1 
00:07:52.000 --rc genhtml_legend=1 00:07:52.000 --rc geninfo_all_blocks=1 00:07:52.000 --rc geninfo_unexecuted_blocks=1 00:07:52.000 00:07:52.000 ' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:52.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.000 --rc genhtml_branch_coverage=1 00:07:52.000 --rc genhtml_function_coverage=1 00:07:52.000 --rc genhtml_legend=1 00:07:52.000 --rc geninfo_all_blocks=1 00:07:52.000 --rc geninfo_unexecuted_blocks=1 00:07:52.000 00:07:52.000 ' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:52.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.000 --rc genhtml_branch_coverage=1 00:07:52.000 --rc genhtml_function_coverage=1 00:07:52.000 --rc genhtml_legend=1 00:07:52.000 --rc geninfo_all_blocks=1 00:07:52.000 --rc geninfo_unexecuted_blocks=1 00:07:52.000 00:07:52.000 ' 00:07:52.000 08:20:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:52.000 08:20:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:52.000 08:20:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:52.000 08:20:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:52.000 08:20:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 ************************************ 00:07:52.000 START TEST default_locks 00:07:52.000 ************************************ 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1132 -- # default_locks 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59282 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59282 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # '[' -z 59282 ']' 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:52.000 08:20:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 [2024-11-20 08:20:39.472190] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
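The lcov probe above runs through scripts/common.sh's version comparison (lt 1.15 2), which splits both strings on '.', '-' and ':' and compares field by field. A simplified equivalent for plain numeric versions; non-numeric fields (like '-pre' suffixes) would need the extra handling the real cmp_versions has:

    version_lt() {    # succeeds if $1 sorts strictly before $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            # 10# forces base-10 so fields like "08" are not read as octal
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1      # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2"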
00:07:52.001 [2024-11-20 08:20:39.472313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:07:52.259 [2024-11-20 08:20:39.655967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.259 [2024-11-20 08:20:39.794324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.635 08:20:40 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:53.635 08:20:40 event.cpu_locks.default_locks -- common/autotest_common.sh@871 -- # return 0 00:07:53.635 08:20:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59282 00:07:53.635 08:20:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59282 00:07:53.635 08:20:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59282 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' -z 59282 ']' 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill -0 59282 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # uname 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59282 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:53.894 killing process with pid 59282 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59282' 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # kill 59282 00:07:53.894 08:20:41 event.cpu_locks.default_locks -- common/autotest_common.sh@981 -- # wait 59282 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59282 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # local es=0 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 59282 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@658 -- # waitforlisten 59282 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # '[' -z 59282 ']' 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:56.466 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 ERROR: process (pid: 59282) is no longer running 00:07:56.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (59282) - No such process 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@871 -- # return 1 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@658 -- # es=1 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:56.466 00:07:56.466 real 0m4.666s 00:07:56.466 user 0m4.470s 00:07:56.466 sys 0m0.885s 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:56.466 08:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 ************************************ 00:07:56.466 END TEST default_locks 00:07:56.466 ************************************ 00:07:56.723 08:20:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:56.723 08:20:44 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:56.723 08:20:44 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:56.723 08:20:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.723 ************************************ 00:07:56.723 START TEST default_locks_via_rpc 00:07:56.723 ************************************ 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1132 -- # default_locks_via_rpc 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59364 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59364 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 59364 ']' 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
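locks_exist, used at the top of the default_locks run above, is just a pipeline: ask lslocks which file locks the target PID holds and grep for the SPDK CPU-lock name. A sketch assuming util-linux's lslocks; the lock-file location in the comment is an assumption:

    locks_exist() {
        local pid=$1
        # spdk_tgt flock()s one spdk_cpu_lock_* file per claimed core
        # (under /var/tmp on these test VMs); lslocks lists them by owning PID
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }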
00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:56.723 08:20:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.723 [2024-11-20 08:20:44.213402] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:56.723 [2024-11-20 08:20:44.213525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:07:56.982 [2024-11-20 08:20:44.400149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.982 [2024-11-20 08:20:44.531089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.915 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:57.915 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59364 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59364 00:07:58.174 08:20:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59364 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' -z 59364 ']' 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill -0 59364 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # uname 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59364 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:58.741 killing process with pid 59364 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59364' 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # kill 59364 00:07:58.741 08:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@981 -- # wait 59364 00:08:01.272 00:08:01.272 real 0m4.686s 00:08:01.272 user 0m4.668s 00:08:01.272 sys 0m0.740s 00:08:01.272 08:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:01.272 ************************************ 00:08:01.272 END TEST default_locks_via_rpc 00:08:01.272 ************************************ 00:08:01.272 08:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.530 08:20:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:01.530 08:20:48 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:01.530 08:20:48 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:01.530 08:20:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.530 ************************************ 00:08:01.530 START TEST non_locking_app_on_locked_coremask 00:08:01.530 ************************************ 00:08:01.530 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1132 -- # non_locking_app_on_locked_coremask 00:08:01.530 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59444 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59444 /var/tmp/spdk.sock 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59444 ']' 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:01.531 08:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.531 [2024-11-20 08:20:48.981539] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
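default_locks_via_rpc, which just passed, round-trips the same state over RPC instead of restarting the target: disable the cpumask locks, confirm nothing is held, re-enable, confirm they are back. A condensed sketch using the two framework_*_cpumask_locks calls from the trace; $tgt_pid is assumed to hold the PID of a running spdk_tgt -m 0x1:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used on the test VM

    $rpc framework_disable_cpumask_locks              # drop the per-core lock files
    if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
        echo "lock unexpectedly still held" >&2
        exit 1
    fi

    $rpc framework_enable_cpumask_locks               # retake them
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock     # must be held again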
00:08:01.531 [2024-11-20 08:20:48.981704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:08:01.790 [2024-11-20 08:20:49.173635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.790 [2024-11-20 08:20:49.325677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59465 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59465 /var/tmp/spdk2.sock 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59465 ']' 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:03.196 08:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.196 [2024-11-20 08:20:50.479747] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:03.196 [2024-11-20 08:20:50.480150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59465 ] 00:08:03.196 [2024-11-20 08:20:50.672948] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:03.196 [2024-11-20 08:20:50.673020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.455 [2024-11-20 08:20:50.976401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.987 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:05.987 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:05.987 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59444 00:08:05.987 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59444 00:08:05.987 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59444 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 59444 ']' 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 59444 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59444 00:08:06.555 killing process with pid 59444 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59444' 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 59444 00:08:06.555 08:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 59444 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59465 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 59465 ']' 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 59465 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59465 00:08:11.829 killing process with pid 59465 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59465' 00:08:11.829 08:20:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 59465 00:08:11.829 08:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 59465 00:08:14.364 00:08:14.364 real 0m13.010s 00:08:14.364 user 0m12.987s 00:08:14.364 sys 0m1.829s 00:08:14.364 08:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:14.364 ************************************ 00:08:14.364 END TEST non_locking_app_on_locked_coremask 00:08:14.364 ************************************ 00:08:14.364 08:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.624 08:21:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:14.624 08:21:01 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:14.624 08:21:01 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:14.624 08:21:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.624 ************************************ 00:08:14.624 START TEST locking_app_on_unlocked_coremask 00:08:14.624 ************************************ 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1132 -- # locking_app_on_unlocked_coremask 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59627 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59627 /var/tmp/spdk.sock 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59627 ']' 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:14.624 08:21:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.624 [2024-11-20 08:21:02.064553] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:14.624 [2024-11-20 08:21:02.064686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:08:14.883 [2024-11-20 08:21:02.243696] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
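The non_locking_app_on_locked_coremask run that just ended demonstrates why the second instance needs --disable-cpumask-locks: the first target already holds the core-0 lock, so a second one on the same mask can only start if it opts out and talks on its own RPC socket. A sketch of that arrangement with the exact flags from the trace (waitforlisten-style readiness checks omitted for brevity):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $bin -m 0x1 &                                         # claims core 0 and its lock
    pid1=$!
    # second target on the same core: must skip the lock and use a second socket
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!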
00:08:14.883 [2024-11-20 08:21:02.243753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.883 [2024-11-20 08:21:02.375088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.260 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59649 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59649 /var/tmp/spdk2.sock 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59649 ']' 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:16.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:16.261 08:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.261 [2024-11-20 08:21:03.515455] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
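Every "Waiting for process to start up and listen..." line in this log comes from waitforlisten: poll until the target either dies or its RPC socket answers, up to the max_retries=100 seen in the trace. A simplified sketch; the real helper confirms the RPC server actually responds (via rpc.py) rather than only testing that the socket path exists:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            if ! kill -0 "$pid" 2> /dev/null; then
                echo "ERROR: process (pid: $pid) is no longer running" >&2
                return 1
            fi
            [[ -S $rpc_addr ]] && return 0   # socket file present => assume listening
            sleep 0.1
        done
        return 1
    }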
00:08:16.261 [2024-11-20 08:21:03.515830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:08:16.261 [2024-11-20 08:21:03.702629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.519 [2024-11-20 08:21:03.991447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.049 08:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:19.049 08:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:19.049 08:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59649 00:08:19.049 08:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59649 00:08:19.049 08:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59627 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' -z 59627 ']' 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill -0 59627 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # uname 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59627 00:08:19.614 killing process with pid 59627 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59627' 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # kill 59627 00:08:19.614 08:21:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@981 -- # wait 59627 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59649 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' -z 59649 ']' 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill -0 59649 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # uname 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59649 00:08:26.209 killing process with pid 59649 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:26.209 08:21:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59649' 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # kill 59649 00:08:26.209 08:21:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@981 -- # wait 59649 00:08:27.588 ************************************ 00:08:27.588 END TEST locking_app_on_unlocked_coremask 00:08:27.588 ************************************ 00:08:27.588 00:08:27.588 real 0m13.175s 00:08:27.588 user 0m13.270s 00:08:27.588 sys 0m1.769s 00:08:27.588 08:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:27.588 08:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.847 08:21:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:27.847 08:21:15 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:27.847 08:21:15 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:27.847 08:21:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.847 ************************************ 00:08:27.847 START TEST locking_app_on_locked_coremask 00:08:27.847 ************************************ 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1132 -- # locking_app_on_locked_coremask 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59810 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59810 /var/tmp/spdk.sock 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59810 ']' 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:27.847 08:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.847 [2024-11-20 08:21:15.312912] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:08:27.847 [2024-11-20 08:21:15.313063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:08:28.106 [2024-11-20 08:21:15.491961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.106 [2024-11-20 08:21:15.622031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59832 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59832 /var/tmp/spdk2.sock 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # local es=0 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 59832 /var/tmp/spdk2.sock 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@658 -- # waitforlisten 59832 /var/tmp/spdk2.sock 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 59832 ']' 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.483 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:29.484 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.484 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:29.484 08:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.484 [2024-11-20 08:21:16.712863] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:08:29.484 [2024-11-20 08:21:16.713182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:08:29.484 [2024-11-20 08:21:16.898544] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59810 has claimed it. 00:08:29.484 [2024-11-20 08:21:16.898611] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:30.051 ERROR: process (pid: 59832) is no longer running 00:08:30.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (59832) - No such process 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 1 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@658 -- # es=1 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59810 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59810 00:08:30.051 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59810 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 59810 ']' 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 59810 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59810 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59810' 00:08:30.310 killing process with pid 59810 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 59810 00:08:30.310 08:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 59810 00:08:32.882 00:08:32.882 real 0m5.152s 00:08:32.882 user 0m5.158s 00:08:32.882 sys 0m0.984s 00:08:32.882 08:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:32.882 ************************************ 00:08:32.882 END 
TEST locking_app_on_locked_coremask 00:08:32.882 ************************************ 00:08:32.882 08:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.882 08:21:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:32.882 08:21:20 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:32.882 08:21:20 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:32.882 08:21:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.882 ************************************ 00:08:32.882 START TEST locking_overlapped_coremask 00:08:32.882 ************************************ 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1132 -- # locking_overlapped_coremask 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59901 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59901 /var/tmp/spdk.sock 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # '[' -z 59901 ']' 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:32.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:32.882 08:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 [2024-11-20 08:21:20.547042] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:08:33.145 [2024-11-20 08:21:20.547426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:08:33.404 [2024-11-20 08:21:20.735024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.404 [2024-11-20 08:21:20.868534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.404 [2024-11-20 08:21:20.868678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.404 [2024-11-20 08:21:20.868712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@871 -- # return 0 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59925 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59925 /var/tmp/spdk2.sock 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # local es=0 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 59925 /var/tmp/spdk2.sock 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@658 -- # waitforlisten 59925 /var/tmp/spdk2.sock 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # '[' -z 59925 ']' 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:34.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:34.341 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:34.342 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:34.342 08:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 [2024-11-20 08:21:21.980220] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:08:34.600 [2024-11-20 08:21:21.980346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:08:34.859 [2024-11-20 08:21:22.164473] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59901 has claimed it. 00:08:34.859 [2024-11-20 08:21:22.164534] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:35.118 ERROR: process (pid: 59925) is no longer running 00:08:35.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (59925) - No such process 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@871 -- # return 1 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@658 -- # es=1 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:35.118 08:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59901 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' -z 59901 ']' 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill -0 59901 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # uname 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59901 00:08:35.119 killing process with pid 59901 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59901' 00:08:35.119 08:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # kill 59901 00:08:35.119 08:21:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@981 -- # wait 59901 00:08:38.406 00:08:38.406 real 0m4.951s 00:08:38.406 user 0m13.304s 00:08:38.406 sys 0m0.743s 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.406 ************************************ 00:08:38.406 END TEST locking_overlapped_coremask 00:08:38.406 ************************************ 00:08:38.406 08:21:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:38.406 08:21:25 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:38.406 08:21:25 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:38.406 08:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.406 ************************************ 00:08:38.406 START TEST locking_overlapped_coremask_via_rpc 00:08:38.406 ************************************ 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1132 -- # locking_overlapped_coremask_via_rpc 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59994 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59994 /var/tmp/spdk.sock 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 59994 ']' 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:38.406 08:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.406 [2024-11-20 08:21:25.566083] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:38.406 [2024-11-20 08:21:25.566452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:08:38.406 [2024-11-20 08:21:25.744476] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:38.406 [2024-11-20 08:21:25.744732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.406 [2024-11-20 08:21:25.896403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.406 [2024-11-20 08:21:25.896501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.406 [2024-11-20 08:21:25.896520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60018 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60018 /var/tmp/spdk2.sock 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 60018 ']' 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:39.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:39.784 08:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 [2024-11-20 08:21:27.085784] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:39.784 [2024-11-20 08:21:27.086651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60018 ] 00:08:39.784 [2024-11-20 08:21:27.277353] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:39.784 [2024-11-20 08:21:27.277410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.051 [2024-11-20 08:21:27.534291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.051 [2024-11-20 08:21:27.538099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.051 [2024-11-20 08:21:27.538129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:42.595 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # local es=0 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@658 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 [2024-11-20 08:21:29.748194] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59994 has claimed it. 
00:08:42.596 request: 00:08:42.596 { 00:08:42.596 "method": "framework_enable_cpumask_locks", 00:08:42.596 "req_id": 1 00:08:42.596 } 00:08:42.596 Got JSON-RPC error response 00:08:42.596 response: 00:08:42.596 { 00:08:42.596 "code": -32603, 00:08:42.596 "message": "Failed to claim CPU core: 2" 00:08:42.596 } 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@658 -- # es=1 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59994 /var/tmp/spdk.sock 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 59994 ']' 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:42.596 08:21:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:08:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60018 /var/tmp/spdk2.sock 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 60018 ']' 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:42.596 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.854 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:42.855 00:08:42.855 real 0m4.811s 00:08:42.855 user 0m1.488s 00:08:42.855 sys 0m0.225s 00:08:42.855 ************************************ 00:08:42.855 END TEST locking_overlapped_coremask_via_rpc 00:08:42.855 ************************************ 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:42.855 08:21:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.855 08:21:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:42.855 08:21:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59994 ]] 00:08:42.855 08:21:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59994 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 59994 ']' 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 59994 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@962 -- # uname 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59994 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:42.855 killing process with pid 59994 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59994' 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@976 -- # kill 59994 00:08:42.855 08:21:30 event.cpu_locks -- common/autotest_common.sh@981 -- # wait 59994 00:08:46.141 08:21:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60018 ]] 00:08:46.141 08:21:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60018 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 60018 ']' 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 60018 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@962 -- # uname 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:46.141 
08:21:33 event.cpu_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 60018 00:08:46.141 killing process with pid 60018 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 60018' 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@976 -- # kill 60018 00:08:46.141 08:21:33 event.cpu_locks -- common/autotest_common.sh@981 -- # wait 60018 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:48.063 Process with pid 59994 is not found 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59994 ]] 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59994 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 59994 ']' 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 59994 00:08:48.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (59994) - No such process 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@984 -- # echo 'Process with pid 59994 is not found' 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60018 ]] 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60018 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 60018 ']' 00:08:48.063 Process with pid 60018 is not found 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 60018 00:08:48.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (60018) - No such process 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@984 -- # echo 'Process with pid 60018 is not found' 00:08:48.063 08:21:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:48.063 00:08:48.063 real 0m56.450s 00:08:48.063 user 1m34.089s 00:08:48.063 sys 0m8.653s 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:48.063 08:21:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 ************************************ 00:08:48.063 END TEST cpu_locks 00:08:48.063 ************************************ 00:08:48.063 ************************************ 00:08:48.063 END TEST event 00:08:48.063 ************************************ 00:08:48.063 00:08:48.063 real 1m29.454s 00:08:48.063 user 2m38.576s 00:08:48.063 sys 0m13.924s 00:08:48.064 08:21:35 event -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:48.064 08:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:08:48.321 08:21:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:48.321 08:21:35 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:48.321 08:21:35 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:48.321 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:08:48.321 ************************************ 00:08:48.321 START TEST thread 00:08:48.321 ************************************ 00:08:48.321 08:21:35 thread -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:48.321 * Looking for test storage... 
00:08:48.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:48.321 08:21:35 thread -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:48.321 08:21:35 thread -- common/autotest_common.sh@1638 -- # lcov --version 00:08:48.321 08:21:35 thread -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:48.580 08:21:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.580 08:21:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.580 08:21:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.580 08:21:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.580 08:21:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.580 08:21:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.580 08:21:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.580 08:21:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.580 08:21:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.580 08:21:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.580 08:21:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.580 08:21:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:48.580 08:21:35 thread -- scripts/common.sh@345 -- # : 1 00:08:48.580 08:21:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.580 08:21:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.580 08:21:35 thread -- scripts/common.sh@365 -- # decimal 1 00:08:48.580 08:21:35 thread -- scripts/common.sh@353 -- # local d=1 00:08:48.580 08:21:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.580 08:21:35 thread -- scripts/common.sh@355 -- # echo 1 00:08:48.580 08:21:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.580 08:21:35 thread -- scripts/common.sh@366 -- # decimal 2 00:08:48.580 08:21:35 thread -- scripts/common.sh@353 -- # local d=2 00:08:48.580 08:21:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.580 08:21:35 thread -- scripts/common.sh@355 -- # echo 2 00:08:48.580 08:21:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.580 08:21:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.580 08:21:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.580 08:21:35 thread -- scripts/common.sh@368 -- # return 0 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:48.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.580 --rc genhtml_branch_coverage=1 00:08:48.580 --rc genhtml_function_coverage=1 00:08:48.580 --rc genhtml_legend=1 00:08:48.580 --rc geninfo_all_blocks=1 00:08:48.580 --rc geninfo_unexecuted_blocks=1 00:08:48.580 00:08:48.580 ' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:48.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.580 --rc genhtml_branch_coverage=1 00:08:48.580 --rc genhtml_function_coverage=1 00:08:48.580 --rc genhtml_legend=1 00:08:48.580 --rc geninfo_all_blocks=1 00:08:48.580 --rc geninfo_unexecuted_blocks=1 00:08:48.580 00:08:48.580 ' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:48.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:48.580 --rc genhtml_branch_coverage=1 00:08:48.580 --rc genhtml_function_coverage=1 00:08:48.580 --rc genhtml_legend=1 00:08:48.580 --rc geninfo_all_blocks=1 00:08:48.580 --rc geninfo_unexecuted_blocks=1 00:08:48.580 00:08:48.580 ' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:48.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.580 --rc genhtml_branch_coverage=1 00:08:48.580 --rc genhtml_function_coverage=1 00:08:48.580 --rc genhtml_legend=1 00:08:48.580 --rc geninfo_all_blocks=1 00:08:48.580 --rc geninfo_unexecuted_blocks=1 00:08:48.580 00:08:48.580 ' 00:08:48.580 08:21:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1108 -- # '[' 8 -le 1 ']' 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:48.580 08:21:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.580 ************************************ 00:08:48.580 START TEST thread_poller_perf 00:08:48.580 ************************************ 00:08:48.580 08:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:48.580 [2024-11-20 08:21:36.006541] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:48.580 [2024-11-20 08:21:36.006801] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:08:48.839 [2024-11-20 08:21:36.191646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.839 [2024-11-20 08:21:36.318166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.839 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:50.216 [2024-11-20T08:21:37.777Z] ====================================== 00:08:50.216 [2024-11-20T08:21:37.777Z] busy:2502048480 (cyc) 00:08:50.216 [2024-11-20T08:21:37.777Z] total_run_count: 374000 00:08:50.216 [2024-11-20T08:21:37.777Z] tsc_hz: 2490000000 (cyc) 00:08:50.216 [2024-11-20T08:21:37.777Z] ====================================== 00:08:50.216 [2024-11-20T08:21:37.777Z] poller_cost: 6689 (cyc), 2686 (nsec) 00:08:50.216 00:08:50.216 real 0m1.615s 00:08:50.216 user 0m1.386s 00:08:50.216 sys 0m0.119s 00:08:50.216 ************************************ 00:08:50.216 END TEST thread_poller_perf 00:08:50.216 ************************************ 00:08:50.216 08:21:37 thread.thread_poller_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:50.216 08:21:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:50.216 08:21:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.216 08:21:37 thread -- common/autotest_common.sh@1108 -- # '[' 8 -le 1 ']' 00:08:50.216 08:21:37 thread -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:50.216 08:21:37 thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.216 ************************************ 00:08:50.216 START TEST thread_poller_perf 00:08:50.216 ************************************ 00:08:50.216 08:21:37 thread.thread_poller_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.216 [2024-11-20 08:21:37.705425] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:50.216 [2024-11-20 08:21:37.705542] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:08:50.474 [2024-11-20 08:21:37.891656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.732 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:50.732 [2024-11-20 08:21:38.034388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.108 [2024-11-20T08:21:39.669Z] ====================================== 00:08:52.108 [2024-11-20T08:21:39.669Z] busy:2494269950 (cyc) 00:08:52.108 [2024-11-20T08:21:39.669Z] total_run_count: 5121000 00:08:52.108 [2024-11-20T08:21:39.669Z] tsc_hz: 2490000000 (cyc) 00:08:52.108 [2024-11-20T08:21:39.669Z] ====================================== 00:08:52.108 [2024-11-20T08:21:39.669Z] poller_cost: 487 (cyc), 195 (nsec) 00:08:52.108 00:08:52.108 real 0m1.638s 00:08:52.108 user 0m1.401s 00:08:52.108 sys 0m0.129s 00:08:52.108 08:21:39 thread.thread_poller_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:52.108 ************************************ 00:08:52.108 END TEST thread_poller_perf 00:08:52.108 ************************************ 00:08:52.108 08:21:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:52.108 08:21:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:52.108 ************************************ 00:08:52.108 END TEST thread 00:08:52.108 ************************************ 00:08:52.108 00:08:52.108 real 0m3.690s 00:08:52.108 user 0m2.968s 00:08:52.108 sys 0m0.505s 00:08:52.108 08:21:39 thread -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:52.108 08:21:39 thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.108 08:21:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:52.108 08:21:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:52.108 08:21:39 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:52.108 08:21:39 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:52.108 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:08:52.108 ************************************ 00:08:52.108 START TEST app_cmdline 00:08:52.108 ************************************ 00:08:52.108 08:21:39 app_cmdline -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:52.108 * Looking for test storage... 
00:08:52.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:52.108 08:21:39 app_cmdline -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:52.108 08:21:39 app_cmdline -- common/autotest_common.sh@1638 -- # lcov --version 00:08:52.108 08:21:39 app_cmdline -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:52.366 08:21:39 app_cmdline -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:52.366 08:21:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.366 08:21:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.366 08:21:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.366 08:21:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.366 08:21:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.367 08:21:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:52.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.367 --rc genhtml_branch_coverage=1 00:08:52.367 --rc genhtml_function_coverage=1 00:08:52.367 --rc genhtml_legend=1 00:08:52.367 --rc geninfo_all_blocks=1 00:08:52.367 --rc geninfo_unexecuted_blocks=1 00:08:52.367 00:08:52.367 ' 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:52.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.367 --rc genhtml_branch_coverage=1 00:08:52.367 --rc genhtml_function_coverage=1 00:08:52.367 --rc genhtml_legend=1 00:08:52.367 --rc geninfo_all_blocks=1 00:08:52.367 --rc geninfo_unexecuted_blocks=1 00:08:52.367 
00:08:52.367 ' 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:52.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.367 --rc genhtml_branch_coverage=1 00:08:52.367 --rc genhtml_function_coverage=1 00:08:52.367 --rc genhtml_legend=1 00:08:52.367 --rc geninfo_all_blocks=1 00:08:52.367 --rc geninfo_unexecuted_blocks=1 00:08:52.367 00:08:52.367 ' 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:52.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.367 --rc genhtml_branch_coverage=1 00:08:52.367 --rc genhtml_function_coverage=1 00:08:52.367 --rc genhtml_legend=1 00:08:52.367 --rc geninfo_all_blocks=1 00:08:52.367 --rc geninfo_unexecuted_blocks=1 00:08:52.367 00:08:52.367 ' 00:08:52.367 08:21:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:52.367 08:21:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60356 00:08:52.367 08:21:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:52.367 08:21:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60356 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@838 -- # '[' -z 60356 ']' 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:52.367 08:21:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:52.367 [2024-11-20 08:21:39.819337] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:08:52.367 [2024-11-20 08:21:39.819704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60356 ] 00:08:52.626 [2024-11-20 08:21:40.009374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.626 [2024-11-20 08:21:40.155406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@871 -- # return 0 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:54.003 { 00:08:54.003 "version": "SPDK v25.01-pre git sha1 717acfa62", 00:08:54.003 "fields": { 00:08:54.003 "major": 25, 00:08:54.003 "minor": 1, 00:08:54.003 "patch": 0, 00:08:54.003 "suffix": "-pre", 00:08:54.003 "commit": "717acfa62" 00:08:54.003 } 00:08:54.003 } 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:54.003 08:21:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@655 -- # local es=0 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:54.003 08:21:41 app_cmdline -- common/autotest_common.sh@658 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:54.262 request: 00:08:54.262 { 00:08:54.262 "method": "env_dpdk_get_mem_stats", 00:08:54.262 "req_id": 1 00:08:54.262 } 00:08:54.262 Got JSON-RPC error response 00:08:54.262 response: 00:08:54.262 { 00:08:54.262 "code": -32601, 00:08:54.262 "message": "Method not found" 00:08:54.262 } 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@658 -- # es=1 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:08:54.262 08:21:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60356 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@957 -- # '[' -z 60356 ']' 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@961 -- # kill -0 60356 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@962 -- # uname 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 60356 00:08:54.262 killing process with pid 60356 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@975 -- # echo 'killing process with pid 60356' 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@976 -- # kill 60356 00:08:54.262 08:21:41 app_cmdline -- common/autotest_common.sh@981 -- # wait 60356 00:08:56.796 00:08:56.796 real 0m4.867s 00:08:56.796 user 0m4.912s 00:08:56.796 sys 0m0.842s 00:08:56.796 08:21:44 app_cmdline -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:56.797 ************************************ 00:08:56.797 END TEST app_cmdline 00:08:56.797 ************************************ 00:08:56.797 08:21:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:57.055 08:21:44 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:57.055 08:21:44 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:08:57.055 08:21:44 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:57.055 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:08:57.055 ************************************ 00:08:57.055 START TEST version 00:08:57.055 ************************************ 00:08:57.055 08:21:44 version -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:57.055 * Looking for test storage... 
00:08:57.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:57.055 08:21:44 version -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:57.055 08:21:44 version -- common/autotest_common.sh@1638 -- # lcov --version 00:08:57.055 08:21:44 version -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:57.315 08:21:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.315 08:21:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.315 08:21:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.315 08:21:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.315 08:21:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.315 08:21:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.315 08:21:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.315 08:21:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.315 08:21:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.315 08:21:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.315 08:21:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.315 08:21:44 version -- scripts/common.sh@344 -- # case "$op" in 00:08:57.315 08:21:44 version -- scripts/common.sh@345 -- # : 1 00:08:57.315 08:21:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.315 08:21:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.315 08:21:44 version -- scripts/common.sh@365 -- # decimal 1 00:08:57.315 08:21:44 version -- scripts/common.sh@353 -- # local d=1 00:08:57.315 08:21:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.315 08:21:44 version -- scripts/common.sh@355 -- # echo 1 00:08:57.315 08:21:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.315 08:21:44 version -- scripts/common.sh@366 -- # decimal 2 00:08:57.315 08:21:44 version -- scripts/common.sh@353 -- # local d=2 00:08:57.315 08:21:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.315 08:21:44 version -- scripts/common.sh@355 -- # echo 2 00:08:57.315 08:21:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.315 08:21:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.315 08:21:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.315 08:21:44 version -- scripts/common.sh@368 -- # return 0 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.315 --rc genhtml_branch_coverage=1 00:08:57.315 --rc genhtml_function_coverage=1 00:08:57.315 --rc genhtml_legend=1 00:08:57.315 --rc geninfo_all_blocks=1 00:08:57.315 --rc geninfo_unexecuted_blocks=1 00:08:57.315 00:08:57.315 ' 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.315 --rc genhtml_branch_coverage=1 00:08:57.315 --rc genhtml_function_coverage=1 00:08:57.315 --rc genhtml_legend=1 00:08:57.315 --rc geninfo_all_blocks=1 00:08:57.315 --rc geninfo_unexecuted_blocks=1 00:08:57.315 00:08:57.315 ' 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:57.315 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:57.315 --rc genhtml_branch_coverage=1 00:08:57.315 --rc genhtml_function_coverage=1 00:08:57.315 --rc genhtml_legend=1 00:08:57.315 --rc geninfo_all_blocks=1 00:08:57.315 --rc geninfo_unexecuted_blocks=1 00:08:57.315 00:08:57.315 ' 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.315 --rc genhtml_branch_coverage=1 00:08:57.315 --rc genhtml_function_coverage=1 00:08:57.315 --rc genhtml_legend=1 00:08:57.315 --rc geninfo_all_blocks=1 00:08:57.315 --rc geninfo_unexecuted_blocks=1 00:08:57.315 00:08:57.315 ' 00:08:57.315 08:21:44 version -- app/version.sh@17 -- # get_header_version major 00:08:57.315 08:21:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # cut -f2 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.315 08:21:44 version -- app/version.sh@17 -- # major=25 00:08:57.315 08:21:44 version -- app/version.sh@18 -- # get_header_version minor 00:08:57.315 08:21:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # cut -f2 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.315 08:21:44 version -- app/version.sh@18 -- # minor=1 00:08:57.315 08:21:44 version -- app/version.sh@19 -- # get_header_version patch 00:08:57.315 08:21:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # cut -f2 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.315 08:21:44 version -- app/version.sh@19 -- # patch=0 00:08:57.315 08:21:44 version -- app/version.sh@20 -- # get_header_version suffix 00:08:57.315 08:21:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # cut -f2 00:08:57.315 08:21:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.315 08:21:44 version -- app/version.sh@20 -- # suffix=-pre 00:08:57.315 08:21:44 version -- app/version.sh@22 -- # version=25.1 00:08:57.315 08:21:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:57.315 08:21:44 version -- app/version.sh@28 -- # version=25.1rc0 00:08:57.315 08:21:44 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:57.315 08:21:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:57.315 08:21:44 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:57.315 08:21:44 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:57.315 ************************************ 00:08:57.315 END TEST version 00:08:57.315 ************************************ 00:08:57.315 00:08:57.315 real 0m0.374s 00:08:57.315 user 0m0.207s 00:08:57.315 sys 0m0.233s 00:08:57.315 08:21:44 version -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:57.315 08:21:44 version -- common/autotest_common.sh@10 -- # set +x 00:08:57.315 08:21:44 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:57.315 08:21:44 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:57.315 08:21:44 -- spdk/autotest.sh@194 -- # uname -s 00:08:57.315 08:21:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:57.315 08:21:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:57.315 08:21:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:57.316 08:21:44 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:57.316 08:21:44 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:57.316 08:21:44 -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:08:57.316 08:21:44 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:57.316 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:08:57.316 ************************************ 00:08:57.316 START TEST blockdev_nvme 00:08:57.316 ************************************ 00:08:57.316 08:21:44 blockdev_nvme -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:57.575 * Looking for test storage... 00:08:57.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1638 -- # lcov --version 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.575 08:21:45 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:57.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.575 --rc genhtml_branch_coverage=1 00:08:57.575 --rc genhtml_function_coverage=1 00:08:57.575 --rc genhtml_legend=1 00:08:57.575 --rc geninfo_all_blocks=1 00:08:57.575 --rc geninfo_unexecuted_blocks=1 00:08:57.575 00:08:57.575 ' 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:57.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.575 --rc genhtml_branch_coverage=1 00:08:57.575 --rc genhtml_function_coverage=1 00:08:57.575 --rc genhtml_legend=1 00:08:57.575 --rc geninfo_all_blocks=1 00:08:57.575 --rc geninfo_unexecuted_blocks=1 00:08:57.575 00:08:57.575 ' 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:57.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.575 --rc genhtml_branch_coverage=1 00:08:57.575 --rc genhtml_function_coverage=1 00:08:57.575 --rc genhtml_legend=1 00:08:57.575 --rc geninfo_all_blocks=1 00:08:57.575 --rc geninfo_unexecuted_blocks=1 00:08:57.575 00:08:57.575 ' 00:08:57.575 08:21:45 blockdev_nvme -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:57.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.575 --rc genhtml_branch_coverage=1 00:08:57.575 --rc genhtml_function_coverage=1 00:08:57.575 --rc genhtml_legend=1 00:08:57.575 --rc geninfo_all_blocks=1 00:08:57.575 --rc geninfo_unexecuted_blocks=1 00:08:57.575 00:08:57.575 ' 00:08:57.575 08:21:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:57.575 08:21:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:57.575 08:21:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:57.575 08:21:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60562 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60562 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@838 -- # '[' -z 60562 ']' 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:57.576 08:21:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.576 08:21:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:57.835 [2024-11-20 08:21:45.262282] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
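Before any block tests run, setup_nvme_conf (just below) feeds the target a configuration generated by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per emulated PCIe controller. The JSON blob that follows is the batch form; a hedged per-controller equivalent over RPC, for the first device only, would be:

  # Attach a single controller by PCIe address (sketch; the test loads
  # the whole generated JSON subsystem config in one RPC instead).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t PCIe -a 0000:00:10.0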
00:08:57.835 [2024-11-20 08:21:45.262641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:08:58.094 [2024-11-20 08:21:45.451759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.094 [2024-11-20 08:21:45.596196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.030 08:21:46 blockdev_nvme -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:59.030 08:21:46 blockdev_nvme -- common/autotest_common.sh@871 -- # return 0 00:08:59.030 08:21:46 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:59.030 08:21:46 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:59.030 08:21:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:59.030 08:21:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:59.030 08:21:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:59.289 08:21:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:59.289 08:21:46 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.289 08:21:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 08:21:46 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.549 08:21:46 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:59.549 08:21:46 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.549 08:21:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 08:21:46 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.549 08:21:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:59.549 08:21:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.549 08:21:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.549 08:21:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.549 08:21:47 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:59.549 08:21:47 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:59.549 08:21:47 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:59.549 08:21:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.809 08:21:47 blockdev_nvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:59.809 08:21:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:59.809 08:21:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:59.810 08:21:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "14f62a33-9735-46c2-b221-f5ca3ae70289"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "14f62a33-9735-46c2-b221-f5ca3ae70289",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8b6f6893-6eb5-4476-b1c5-cdb37285b0ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8b6f6893-6eb5-4476-b1c5-cdb37285b0ea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bcd04eb1-0d87-49e2-8c79-0048c7c5acf6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bcd04eb1-0d87-49e2-8c79-0048c7c5acf6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0484dd94-7a1c-4c54-8291-3c463b95b81d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0484dd94-7a1c-4c54-8291-3c463b95b81d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "67568bc5-ccf4-49da-b86a-92c24ab42467"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "67568bc5-ccf4-49da-b86a-92c24ab42467",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d111f600-e558-47e9-9091-fba168853b69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d111f600-e558-47e9-9091-fba168853b69",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:59.810 08:21:47 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:59.810 08:21:47 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:59.810 08:21:47 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:59.810 08:21:47 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60562 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@957 -- # '[' -z 60562 ']' 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@961 -- # kill -0 60562 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@962 -- # uname 00:08:59.810 08:21:47 
blockdev_nvme -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 60562 00:08:59.810 killing process with pid 60562 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@975 -- # echo 'killing process with pid 60562' 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@976 -- # kill 60562 00:08:59.810 08:21:47 blockdev_nvme -- common/autotest_common.sh@981 -- # wait 60562 00:09:03.098 08:21:49 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:03.098 08:21:49 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:03.098 08:21:49 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 7 -le 1 ']' 00:09:03.098 08:21:49 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:03.098 08:21:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.098 ************************************ 00:09:03.098 START TEST bdev_hello_world 00:09:03.098 ************************************ 00:09:03.098 08:21:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:03.098 [2024-11-20 08:21:50.021561] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:03.098 [2024-11-20 08:21:50.021957] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:09:03.098 [2024-11-20 08:21:50.208951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.098 [2024-11-20 08:21:50.362171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.664 [2024-11-20 08:21:51.099354] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:03.664 [2024-11-20 08:21:51.099651] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:03.664 [2024-11-20 08:21:51.099687] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:03.664 [2024-11-20 08:21:51.103174] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:03.664 [2024-11-20 08:21:51.103519] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:03.664 [2024-11-20 08:21:51.103557] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:03.664 [2024-11-20 08:21:51.103669] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
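The notices above trace the canonical bdev API sequence that the hello_bdev example implements: start the app framework, open the bdev, grab an I/O channel, write a buffer, then read it back and compare. Outside the test harness the same run is just:

  # Run the example against the first NVMe bdev from the generated config.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1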
00:09:03.664 00:09:03.664 [2024-11-20 08:21:51.103699] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:05.037 00:09:05.037 real 0m2.472s 00:09:05.037 user 0m2.039s 00:09:05.037 sys 0m0.323s 00:09:05.037 08:21:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:05.037 ************************************ 00:09:05.037 END TEST bdev_hello_world 00:09:05.037 ************************************ 00:09:05.037 08:21:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:05.037 08:21:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:05.037 08:21:52 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:05.037 08:21:52 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:05.037 08:21:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.037 ************************************ 00:09:05.037 START TEST bdev_bounds 00:09:05.037 ************************************ 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1132 -- # bdev_bounds '' 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60710 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:05.037 Process bdevio pid: 60710 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60710' 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60710 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # '[' -z 60710 ']' 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:05.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:05.037 08:21:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:05.037 [2024-11-20 08:21:52.558027] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
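The bounds test starting here wraps the bdevio binary shown in the command line above: -w makes the app come up and wait on its RPC socket, and the suites are then kicked off by tests.py. A manual equivalent of what the harness does:

  # Start bdevio idle, waiting for an RPC trigger (flags copied from the log).
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

  # Fire all registered CUnit suites once the app is listening.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests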
00:09:05.037 [2024-11-20 08:21:52.558176] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:09:05.296 [2024-11-20 08:21:52.748984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.594 [2024-11-20 08:21:52.900485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.594 [2024-11-20 08:21:52.900650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.594 [2024-11-20 08:21:52.900690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.171 08:21:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:06.171 08:21:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@871 -- # return 0 00:09:06.171 08:21:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:06.430 I/O targets: 00:09:06.430 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:06.430 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:06.431 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.431 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.431 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.431 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:06.431 00:09:06.431 00:09:06.431 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.431 http://cunit.sourceforge.net/ 00:09:06.431 00:09:06.431 00:09:06.431 Suite: bdevio tests on: Nvme3n1 00:09:06.431 Test: blockdev write read block ...passed 00:09:06.431 Test: blockdev write zeroes read block ...passed 00:09:06.431 Test: blockdev write zeroes read no split ...passed 00:09:06.431 Test: blockdev write zeroes read split ...passed 00:09:06.431 Test: blockdev write zeroes read split partial ...passed 00:09:06.431 Test: blockdev reset ...[2024-11-20 08:21:53.849509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:06.431 [2024-11-20 08:21:53.854061] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
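Each entry in the I/O targets banner is simply num_blocks times block_size from the bdev dump earlier in this log; for Nvme0n1, for instance:

  # 1548666 blocks * 4096 bytes: integer math gives 6049 MiB,
  # which the bdevio banner rounds up to the 6050 MiB shown above.
  echo $(( 1548666 * 4096 / 1024 / 1024 ))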
00:09:06.431 passed 00:09:06.431 Test: blockdev write read 8 blocks ...passed 00:09:06.431 Test: blockdev write read size > 128k ...passed 00:09:06.431 Test: blockdev write read invalid size ...passed 00:09:06.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.431 Test: blockdev write read max offset ...passed 00:09:06.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.431 Test: blockdev writev readv 8 blocks ...passed 00:09:06.431 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.431 Test: blockdev writev readv block ...passed 00:09:06.431 Test: blockdev writev readv size > 128k ...passed 00:09:06.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.431 Test: blockdev comparev and writev ...[2024-11-20 08:21:53.865939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x242c0a000 len:0x1000 00:09:06.431 [2024-11-20 08:21:53.866252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.431 passed 00:09:06.431 Test: blockdev nvme passthru rw ...passed 00:09:06.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:53.867349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.431 passed 00:09:06.431 Test: blockdev nvme admin passthru ...[2024-11-20 08:21:53.867561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.431 passed 00:09:06.431 Test: blockdev copy ...passed 00:09:06.431 Suite: bdevio tests on: Nvme2n3 00:09:06.431 Test: blockdev write read block ...passed 00:09:06.431 Test: blockdev write zeroes read block ...passed 00:09:06.431 Test: blockdev write zeroes read no split ...passed 00:09:06.431 Test: blockdev write zeroes read split ...passed 00:09:06.431 Test: blockdev write zeroes read split partial ...passed 00:09:06.431 Test: blockdev reset ...[2024-11-20 08:21:53.970960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:06.431 [2024-11-20 08:21:53.975546] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:09:06.431 Test: blockdev write read 8 blocks ...
00:09:06.431 passed 00:09:06.431 Test: blockdev write read size > 128k ...passed 00:09:06.431 Test: blockdev write read invalid size ...passed 00:09:06.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.431 Test: blockdev write read max offset ...passed 00:09:06.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.431 Test: blockdev writev readv 8 blocks ...passed 00:09:06.431 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.431 Test: blockdev writev readv block ...passed 00:09:06.431 Test: blockdev writev readv size > 128k ...passed 00:09:06.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.431 Test: blockdev comparev and writev ...[2024-11-20 08:21:53.985286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x25f606000 len:0x1000 00:09:06.431 [2024-11-20 08:21:53.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.431 passed 00:09:06.431 Test: blockdev nvme passthru rw ...passed 00:09:06.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:53.986985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.431 [2024-11-20 08:21:53.987122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.431 passed 00:09:06.690 Test: blockdev nvme admin passthru ...passed 00:09:06.690 Test: blockdev copy ...passed 00:09:06.690 Suite: bdevio tests on: Nvme2n2 00:09:06.690 Test: blockdev write read block ...passed 00:09:06.690 Test: blockdev write zeroes read block ...passed 00:09:06.690 Test: blockdev write zeroes read no split ...passed 00:09:06.690 Test: blockdev write zeroes read split ...passed 00:09:06.690 Test: blockdev write zeroes read split partial ...passed 00:09:06.690 Test: blockdev reset ...[2024-11-20 08:21:54.093001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:06.690 [2024-11-20 08:21:54.097639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
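The repeated COMPARE FAILURE (02/85) notices in these suites are the point of the comparev test, not a defect: it deliberately compares against mismatching data and expects the miscompare status back. SPDK prints the status as (sct/sc) in hex; a tiny helper to unpack that notation:

  # Split SPDK's "(sct/sc)" status pair into decimal fields (sketch).
  # 02/85 -> sct=2 (media and data integrity errors), sc=133 (compare failure).
  decode_nvme_status() {
      local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
      echo "sct=$sct sc=$sc"
  }
  decode_nvme_status 02/85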
00:09:06.690 passed 00:09:06.690 Test: blockdev write read 8 blocks ...passed 00:09:06.690 Test: blockdev write read size > 128k ...passed 00:09:06.690 Test: blockdev write read invalid size ...passed 00:09:06.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.690 Test: blockdev write read max offset ...passed 00:09:06.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.690 Test: blockdev writev readv 8 blocks ...passed 00:09:06.690 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.690 Test: blockdev writev readv block ...passed 00:09:06.690 Test: blockdev writev readv size > 128k ...passed 00:09:06.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.690 Test: blockdev comparev and writev ...[2024-11-20 08:21:54.108629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x22743c000 len:0x1000 00:09:06.690 [2024-11-20 08:21:54.108978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.690 passed 00:09:06.690 Test: blockdev nvme passthru rw ...passed 00:09:06.690 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:54.110286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.690 [2024-11-20 08:21:54.110537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.690 passed 00:09:06.690 Test: blockdev nvme admin passthru ...passed 00:09:06.690 Test: blockdev copy ...passed 00:09:06.690 Suite: bdevio tests on: Nvme2n1 00:09:06.690 Test: blockdev write read block ...passed 00:09:06.690 Test: blockdev write zeroes read block ...passed 00:09:06.690 Test: blockdev write zeroes read no split ...passed 00:09:06.690 Test: blockdev write zeroes read split ...passed 00:09:06.690 Test: blockdev write zeroes read split partial ...passed 00:09:06.690 Test: blockdev reset ...[2024-11-20 08:21:54.220191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:06.690 passed 00:09:06.690 Test: blockdev write read 8 blocks ...[2024-11-20 08:21:54.224701] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
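Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces behind the one controller at 0000:00:12.0, which is why all three suites reset the same PCIe address and only the nsid in the COMPARE notices differs. That mapping can be read straight out of the running target; assuming jq is available:

  # Print bdev name, PCI address and namespace id for the NVMe bdevs.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | [.name, .driver_specific.nvme[0].pci_address,
                    .driver_specific.nvme[0].ns_data.id] | @tsv'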
00:09:06.690 passed 00:09:06.690 Test: blockdev write read size > 128k ...passed 00:09:06.690 Test: blockdev write read invalid size ...passed 00:09:06.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.690 Test: blockdev write read max offset ...passed 00:09:06.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.690 Test: blockdev writev readv 8 blocks ...passed 00:09:06.690 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.690 Test: blockdev writev readv block ...passed 00:09:06.690 Test: blockdev writev readv size > 128k ...passed 00:09:06.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.690 Test: blockdev comparev and writev ...[2024-11-20 08:21:54.234814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x227438000 len:0x1000 00:09:06.690 passed 00:09:06.690 Test: blockdev nvme passthru rw ...[2024-11-20 08:21:54.235064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.690 passed 00:09:06.690 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:54.236290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.690 passed 00:09:06.690 Test: blockdev nvme admin passthru ...[2024-11-20 08:21:54.236491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.690 passed 00:09:06.690 Test: blockdev copy ...passed 00:09:06.690 Suite: bdevio tests on: Nvme1n1 00:09:06.690 Test: blockdev write read block ...passed 00:09:06.949 Test: blockdev write zeroes read block ...passed 00:09:06.949 Test: blockdev write zeroes read no split ...passed 00:09:06.949 Test: blockdev write zeroes read split ...passed 00:09:06.949 Test: blockdev write zeroes read split partial ...passed 00:09:06.949 Test: blockdev reset ...[2024-11-20 08:21:54.367636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:06.949 [2024-11-20 08:21:54.371785] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:06.949 passed 00:09:06.949 Test: blockdev write read 8 blocks ...passed 00:09:06.949 Test: blockdev write read size > 128k ...passed 00:09:06.949 Test: blockdev write read invalid size ...passed 00:09:06.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.949 Test: blockdev write read max offset ...passed 00:09:06.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.949 Test: blockdev writev readv 8 blocks ...passed 00:09:06.949 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.949 Test: blockdev writev readv block ...passed 00:09:06.949 Test: blockdev writev readv size > 128k ...passed 00:09:06.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.949 Test: blockdev comparev and writev ...[2024-11-20 08:21:54.382129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x227434000 len:0x1000 00:09:06.949 [2024-11-20 08:21:54.382419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.949 passed 00:09:06.949 Test: blockdev nvme passthru rw ...passed 00:09:06.949 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:54.383975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.949 [2024-11-20 08:21:54.384207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.949 passed 00:09:06.949 Test: blockdev nvme admin passthru ...passed 00:09:06.949 Test: blockdev copy ...passed 00:09:06.949 Suite: bdevio tests on: Nvme0n1 00:09:06.949 Test: blockdev write read block ...passed 00:09:06.949 Test: blockdev write zeroes read block ...passed 00:09:06.949 Test: blockdev write zeroes read no split ...passed 00:09:07.208 Test: blockdev write zeroes read split ...passed 00:09:07.208 Test: blockdev write zeroes read split partial ...passed 00:09:07.208 Test: blockdev reset ...[2024-11-20 08:21:54.559368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:07.208 [2024-11-20 08:21:54.563516] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed
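One heads-up for the Nvme0n1 suite in progress: the bdev dump earlier gave this namespace "md_size": 64 with "md_interleave": false, i.e. a separate metadata buffer, and bdevio skips its comparev_and_writev case for exactly that reason (see the skip notice just below). A quick way to spot such bdevs:

  # List bdevs that carry separate (non-interleaved) metadata.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | select(.md_size != null and .md_interleave == false) | .name'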
00:09:07.208 passed 00:09:07.208 Test: blockdev write read 8 blocks ...passed 00:09:07.208 Test: blockdev write read size > 128k ...passed 00:09:07.208 Test: blockdev write read invalid size ...passed 00:09:07.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.208 Test: blockdev write read max offset ...passed 00:09:07.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.208 Test: blockdev writev readv 8 blocks ...passed 00:09:07.208 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.208 Test: blockdev writev readv block ...passed 00:09:07.208 Test: blockdev writev readv size > 128k ...passed 00:09:07.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.208 Test: blockdev comparev and writev ...passed 00:09:07.208 Test: blockdev nvme passthru rw ...[2024-11-20 08:21:54.573359] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:07.208 separate metadata which is not supported yet. 00:09:07.208 passed 00:09:07.208 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:54.574172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:07.208 passed 00:09:07.208 Test: blockdev nvme admin passthru ...[2024-11-20 08:21:54.574456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:07.208 passed 00:09:07.208 Test: blockdev copy ...passed 00:09:07.208 00:09:07.208 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.208 suites 6 6 n/a 0 0 00:09:07.208 tests 138 138 138 0 0 00:09:07.208 asserts 893 893 893 0 n/a 00:09:07.208 00:09:07.208 Elapsed time = 2.084 seconds 00:09:07.208 0 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60710 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' -z 60710 ']' 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@961 -- # kill -0 60710 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # uname 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 60710 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@975 -- # echo 'killing process with pid 60710' 00:09:07.208 killing process with pid 60710 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # kill 60710 00:09:07.208 08:21:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@981 -- # wait 60710 00:09:09.112 08:21:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:09.112 00:09:09.112 real 0m4.060s 00:09:09.112 user 0m10.185s 00:09:09.112 sys 0m0.554s 00:09:09.112 08:21:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:09.112 08:21:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:09.112 ************************************ 00:09:09.112 END TEST 
bdev_bounds 00:09:09.112 ************************************ 00:09:09.112 08:21:56 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:09.112 08:21:56 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:09:09.112 08:21:56 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:09.112 08:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.112 ************************************ 00:09:09.112 START TEST bdev_nbd 00:09:09.112 ************************************ 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1132 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60786 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60786 /var/tmp/spdk-nbd.sock 00:09:09.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
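The bdev_nbd run starting above first boots the minimal bdev_svc app against the JSON bdev config, then blocks in waitforlisten until the RPC socket is up. Stripped to its essentials, the launch-and-wait sequence traced here looks roughly like the following sketch (the polling loop is illustrative, not the actual waitforlisten implementation):

# Sketch: start bdev_svc on a private RPC socket and wait for it to listen.
rpc_sock=/var/tmp/spdk-nbd.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r "$rpc_sock" -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
svc_pid=$!
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break                # socket file created
    kill -0 "$svc_pid" 2>/dev/null || exit 1   # app died before listening
    sleep 0.1
done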
00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # '[' -z 60786 ']' 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:09.112 08:21:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:09.371 [2024-11-20 08:21:56.701107] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:09.371 [2024-11-20 08:21:56.701249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.371 [2024-11-20 08:21:56.891154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.630 [2024-11-20 08:21:57.032324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # return 0 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:10.567 08:21:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # 
local i 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.567 1+0 records in 00:09:10.567 1+0 records out 00:09:10.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661856 s, 6.2 MB/s 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:10.567 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.826 1+0 records in 00:09:10.826 1+0 records out 00:09:10.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727887 s, 5.6 MB/s 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
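Each nbd_start_disk call in this phase is followed by the waitfornbd helper, whose trace repeats through the rest of the section: poll /proc/partitions until the kernel exposes the device, then prove it is usable with a single 4 KiB O_DIRECT read and a size check on the copied block. Condensed from the trace (retry count, block size, and temp path follow the trace; the sleep between polls is an assumption):

# Sketch of the waitfornbd pattern traced above.
waitfornbd() {
    local nbd_name=$1 i
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct-I/O read; fails if the nbd device is not actually ready.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" != 0 ] || return 1
    rm -f "$tmp"
}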
00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:10.826 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd2 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:11.084 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd2 /proc/partitions 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.085 1+0 records in 00:09:11.085 1+0 records out 00:09:11.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664793 s, 6.2 MB/s 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:11.085 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd3 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@879 -- # grep -q -w nbd3 /proc/partitions 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.343 1+0 records in 00:09:11.343 1+0 records out 00:09:11.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823911 s, 5.0 MB/s 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:11.343 08:21:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd4 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd4 /proc/partitions 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.603 1+0 records in 00:09:11.603 1+0 records out 00:09:11.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753297 s, 5.4 MB/s 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:11.603 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd5 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:11.861 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd5 /proc/partitions 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.862 1+0 records in 00:09:11.862 1+0 records out 00:09:11.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830122 s, 4.9 MB/s 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:11.862 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd0", 00:09:12.120 "bdev_name": "Nvme0n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd1", 00:09:12.120 "bdev_name": "Nvme1n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd2", 00:09:12.120 "bdev_name": "Nvme2n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd3", 00:09:12.120 "bdev_name": "Nvme2n2" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd4", 00:09:12.120 "bdev_name": "Nvme2n3" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd5", 00:09:12.120 "bdev_name": "Nvme3n1" 00:09:12.120 } 00:09:12.120 ]' 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd0", 00:09:12.120 "bdev_name": "Nvme0n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd1", 00:09:12.120 "bdev_name": "Nvme1n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd2", 00:09:12.120 "bdev_name": "Nvme2n1" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd3", 00:09:12.120 "bdev_name": "Nvme2n2" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd4", 00:09:12.120 "bdev_name": "Nvme2n3" 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "nbd_device": "/dev/nbd5", 00:09:12.120 "bdev_name": "Nvme3n1" 00:09:12.120 } 00:09:12.120 ]' 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.120 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.380 08:21:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.639 08:22:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.898 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.157 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.415 08:22:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.778 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:13.779 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:14.057 /dev/nbd0 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.057 1+0 records in 00:09:14.057 1+0 records out 00:09:14.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749229 s, 5.5 MB/s 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:14.057 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:14.316 /dev/nbd1 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:09:14.316 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:14.317 08:22:01 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.317 1+0 records in 00:09:14.317 1+0 records out 00:09:14.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579436 s, 7.1 MB/s 00:09:14.317 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:14.576 08:22:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:14.576 /dev/nbd10 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd10 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd10 /proc/partitions 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.835 1+0 records in 00:09:14.835 1+0 records out 00:09:14.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773941 s, 5.3 MB/s 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:14.835 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:14.835 /dev/nbd11 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
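Unlike the earlier start/stop pass, this data-verification pass (nbd_rpc_data_verify) pins each bdev to an explicitly chosen nbd node: Nvme0n1 and Nvme1n1 land on /dev/nbd0 and /dev/nbd1, while the Nvme2/Nvme3 namespaces take /dev/nbd10 through /dev/nbd13. The attach loop reduces to the rpc.py calls visible in the trace (a sketch; the real helper also runs waitfornbd on every node):

# Sketch: attach each bdev to a fixed nbd node over the app's RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
done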
00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd11 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd11 /proc/partitions 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.094 1+0 records in 00:09:15.094 1+0 records out 00:09:15.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732776 s, 5.6 MB/s 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:15.094 /dev/nbd12 00:09:15.094 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd12 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd12 /proc/partitions 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.353 1+0 records in 00:09:15.353 1+0 records out 00:09:15.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545349 s, 7.5 MB/s 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
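Once all six nodes are attached, the harness cross-checks the export list via the nbd_get_disks RPC, filtering the JSON with jq and counting /dev/nbd entries, as the nbd_get_count trace a little further on shows. A self-contained equivalent of that check:

# Sketch: assert that exactly six nbd devices are exported.
count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 6 ] || exit 1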
00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:15.353 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.354 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:15.354 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:15.354 /dev/nbd13 00:09:15.354 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd13 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd13 /proc/partitions 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.612 1+0 records in 00:09:15.612 1+0 records out 00:09:15.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818279 s, 5.0 MB/s 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.612 08:22:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.612 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd0", 00:09:15.612 "bdev_name": "Nvme0n1" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd1", 00:09:15.612 "bdev_name": "Nvme1n1" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": 
"/dev/nbd10", 00:09:15.612 "bdev_name": "Nvme2n1" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd11", 00:09:15.612 "bdev_name": "Nvme2n2" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd12", 00:09:15.612 "bdev_name": "Nvme2n3" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd13", 00:09:15.612 "bdev_name": "Nvme3n1" 00:09:15.612 } 00:09:15.612 ]' 00:09:15.612 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.612 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd0", 00:09:15.612 "bdev_name": "Nvme0n1" 00:09:15.612 }, 00:09:15.612 { 00:09:15.612 "nbd_device": "/dev/nbd1", 00:09:15.612 "bdev_name": "Nvme1n1" 00:09:15.613 }, 00:09:15.613 { 00:09:15.613 "nbd_device": "/dev/nbd10", 00:09:15.613 "bdev_name": "Nvme2n1" 00:09:15.613 }, 00:09:15.613 { 00:09:15.613 "nbd_device": "/dev/nbd11", 00:09:15.613 "bdev_name": "Nvme2n2" 00:09:15.613 }, 00:09:15.613 { 00:09:15.613 "nbd_device": "/dev/nbd12", 00:09:15.613 "bdev_name": "Nvme2n3" 00:09:15.613 }, 00:09:15.613 { 00:09:15.613 "nbd_device": "/dev/nbd13", 00:09:15.613 "bdev_name": "Nvme3n1" 00:09:15.613 } 00:09:15.613 ]' 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.872 /dev/nbd1 00:09:15.872 /dev/nbd10 00:09:15.872 /dev/nbd11 00:09:15.872 /dev/nbd12 00:09:15.872 /dev/nbd13' 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.872 /dev/nbd1 00:09:15.872 /dev/nbd10 00:09:15.872 /dev/nbd11 00:09:15.872 /dev/nbd12 00:09:15.872 /dev/nbd13' 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:15.872 256+0 records in 00:09:15.872 256+0 records out 00:09:15.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741507 s, 141 MB/s 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.872 256+0 records in 00:09:15.872 256+0 records out 00:09:15.872 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.131041 s, 8.0 MB/s 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.872 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:16.131 256+0 records in 00:09:16.131 256+0 records out 00:09:16.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151713 s, 6.9 MB/s 00:09:16.131 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.131 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:16.131 256+0 records in 00:09:16.131 256+0 records out 00:09:16.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135905 s, 7.7 MB/s 00:09:16.131 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.131 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:16.390 256+0 records in 00:09:16.390 256+0 records out 00:09:16.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128193 s, 8.2 MB/s 00:09:16.390 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.390 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:16.651 256+0 records in 00:09:16.651 256+0 records out 00:09:16.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137725 s, 7.6 MB/s 00:09:16.651 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:16.651 256+0 records in 00:09:16.651 256+0 records out 00:09:16.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135434 s, 7.7 MB/s 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
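The write pass above streams the same 1 MiB random pattern file (nbdrandtest) through each nbd node with dd oflag=direct, and the verify pass the trace is entering here reads every device back with cmp -b -n 1M against that pattern file. Reduced to a sketch of the two loops:

# Sketch: write one shared random pattern to every node, then verify it.
pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$pattern" "$nbd" || exit 1   # byte-for-byte compare
done
rm -f "$pattern"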
00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.651 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.652 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.911 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.171 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.430 08:22:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.690 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.950 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:18.210 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:18.470 08:22:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:18.730 malloc_lvol_verify 00:09:18.730 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:18.730 284a76e7-e0b8-4e50-9a67-5546042de1ca 00:09:18.730 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:18.989 479bd043-a7c3-4684-b0f5-1e5b152d73f0 00:09:18.989 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:19.248 /dev/nbd0 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:19.248 mke2fs 1.47.0 (5-Feb-2023) 00:09:19.248 Discarding device blocks: 0/4096 done 00:09:19.248 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:19.248 00:09:19.248 Allocating group tables: 0/1 done 00:09:19.248 Writing inode tables: 0/1 done 00:09:19.248 Creating journal (1024 blocks): done 00:09:19.248 Writing superblocks and filesystem accounting information: 0/1 done 00:09:19.248 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.248 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60786 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' -z 60786 ']' 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@961 -- # kill -0 60786 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # uname 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 60786 00:09:19.507 killing process with pid 60786 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@975 -- # echo 'killing process with pid 60786' 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # kill 60786 00:09:19.507 08:22:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@981 -- # wait 60786 00:09:20.885 08:22:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:20.885 00:09:20.885 real 0m11.711s 00:09:20.885 user 0m15.030s 00:09:20.885 sys 0m4.905s 00:09:20.885 08:22:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1133 -- # xtrace_disable 00:09:20.885 ************************************ 00:09:20.885 08:22:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:20.885 END TEST bdev_nbd 00:09:20.885 ************************************ 00:09:20.885 08:22:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:20.885 08:22:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:20.885 skipping fio tests on NVMe due to multi-ns failures. 00:09:20.885 08:22:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:20.885 08:22:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:20.885 08:22:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:20.885 08:22:08 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']' 00:09:20.885 08:22:08 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:20.885 08:22:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.885 ************************************ 00:09:20.885 START TEST bdev_verify 00:09:20.885 ************************************ 00:09:20.885 08:22:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.143 [2024-11-20 08:22:08.476279] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:21.143 [2024-11-20 08:22:08.476453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61182 ] 00:09:21.143 [2024-11-20 08:22:08.683733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.402 [2024-11-20 08:22:08.825793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.402 [2024-11-20 08:22:08.825797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.338 Running I/O for 5 seconds... 
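The bdev_verify run under way here is a thin wrapper around the bdevperf example; the five seconds of I/O whose samples and latency table follow come down to a single command (paths as used in this workspace):

  # The bdevperf invocation behind bdev_verify: queue depth 128, 4 KiB I/Os,
  # verify workload for 5 seconds on core mask 0x3.
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$BDEVPERF" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Here -q is the queue depth, -o the I/O size in bytes, -w the workload type, -t the run time in seconds, and -m the core mask; bdev.json describes the NVMe controllers attached earlier in the run.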
00:09:24.234 20224.00 IOPS, 79.00 MiB/s [2024-11-20T08:22:13.170Z] 19392.00 IOPS, 75.75 MiB/s [2024-11-20T08:22:14.103Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-20T08:22:15.039Z] 19696.00 IOPS, 76.94 MiB/s [2024-11-20T08:22:15.039Z] 19660.80 IOPS, 76.80 MiB/s
00:09:27.478 Latency(us)
00:09:27.478 [2024-11-20T08:22:15.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:27.478 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0xbd0bd
00:09:27.478 Nvme0n1 : 5.05 1621.02 6.33 0.00 0.00 78752.21 18107.94 78327.36
00:09:27.478 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:27.478 Nvme0n1 : 5.06 1619.98 6.33 0.00 0.00 78807.88 17476.27 79590.71
00:09:27.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0xa0000
00:09:27.478 Nvme1n1 : 5.06 1620.06 6.33 0.00 0.00 78642.54 19687.12 69905.07
00:09:27.478 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0xa0000 length 0xa0000
00:09:27.478 Nvme1n1 : 5.06 1618.84 6.32 0.00 0.00 78700.30 19055.45 71168.41
00:09:27.478 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0x80000
00:09:27.478 Nvme2n1 : 5.06 1619.36 6.33 0.00 0.00 78567.15 20739.91 65272.80
00:09:27.478 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x80000 length 0x80000
00:09:27.478 Nvme2n1 : 5.06 1618.23 6.32 0.00 0.00 78429.72 19055.45 59798.31
00:09:27.478 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0x80000
00:09:27.478 Nvme2n2 : 5.06 1618.96 6.32 0.00 0.00 78326.13 20213.51 66115.03
00:09:27.478 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x80000 length 0x80000
00:09:27.478 Nvme2n2 : 5.06 1617.83 6.32 0.00 0.00 78293.06 17581.55 57692.74
00:09:27.478 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0x80000
00:09:27.478 Nvme2n3 : 5.07 1627.18 6.36 0.00 0.00 77804.37 6237.76 67799.49
00:09:27.478 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x80000 length 0x80000
00:09:27.478 Nvme2n3 : 5.08 1626.40 6.35 0.00 0.00 77779.49 4921.78 58113.85
00:09:27.478 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x0 length 0x20000
00:09:27.478 Nvme3n1 : 5.09 1634.83 6.39 0.00 0.00 77395.55 10527.87 68641.72
00:09:27.478 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:27.478 Verification LBA range: start 0x20000 length 0x20000
00:09:27.478 Nvme3n1 : 5.09 1634.19 6.38 0.00 0.00 77368.96 10527.87 61903.88
00:09:27.478 [2024-11-20T08:22:15.039Z] ===================================================================================================================
00:09:27.478 [2024-11-20T08:22:15.039Z] Total : 19476.86 76.08 0.00 0.00 78236.14 4921.78 79590.71
00:09:28.855
00:09:28.855 real 0m7.747s
00:09:28.855 user 0m14.109s
00:09:28.855 sys 0m0.429s
00:09:28.855 08:22:16 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1133 -- # xtrace_disable 00:09:28.855 08:22:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 ************************************ 00:09:28.855 END TEST bdev_verify 00:09:28.855 ************************************ 00:09:28.855 08:22:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:28.855 08:22:16 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']' 00:09:28.855 08:22:16 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:28.855 08:22:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.855 ************************************ 00:09:28.855 START TEST bdev_verify_big_io 00:09:28.855 ************************************ 00:09:28.855 08:22:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:28.855 [2024-11-20 08:22:16.268069] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:28.855 [2024-11-20 08:22:16.268192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61281 ] 00:09:29.114 [2024-11-20 08:22:16.451706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.114 [2024-11-20 08:22:16.599979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.114 [2024-11-20 08:22:16.600045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.050 Running I/O for 5 seconds... 
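This big-I/O pass is the same verify workload with -o 65536 in place of -o 4096, so each MiB/s figure below is simply IOPS divided by 16. The final sample checks out:

  # 64 KiB I/Os: MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
  awk 'BEGIN { printf "%.2f MiB/s\n", 2464.75 / 16 }'     # prints 154.05 MiB/s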
00:09:34.736 2133.00 IOPS, 133.31 MiB/s [2024-11-20T08:22:23.234Z] 2882.00 IOPS, 180.12 MiB/s [2024-11-20T08:22:23.493Z] 2423.67 IOPS, 151.48 MiB/s [2024-11-20T08:22:23.493Z] 2464.75 IOPS, 154.05 MiB/s
00:09:35.932 Latency(us)
00:09:35.932 [2024-11-20T08:22:23.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.932 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0xbd0b
00:09:35.932 Nvme0n1 : 5.55 146.65 9.17 0.00 0.00 847966.83 26214.40 1024151.34
00:09:35.932 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:35.932 Nvme0n1 : 5.65 133.06 8.32 0.00 0.00 940938.03 29056.93 1078054.04
00:09:35.932 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0xa000
00:09:35.932 Nvme1n1 : 5.64 155.09 9.69 0.00 0.00 786853.11 58534.97 1024151.34
00:09:35.932 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0xa000 length 0xa000
00:09:35.932 Nvme1n1 : 5.66 135.80 8.49 0.00 0.00 904745.67 32215.29 983724.31
00:09:35.932 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0x8000
00:09:35.932 Nvme2n1 : 5.64 155.58 9.72 0.00 0.00 766145.50 58956.08 882656.75
00:09:35.932 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x8000 length 0x8000
00:09:35.932 Nvme2n1 : 5.66 135.73 8.48 0.00 0.00 883328.45 33899.75 1057840.53
00:09:35.932 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0x8000
00:09:35.932 Nvme2n2 : 5.64 158.89 9.93 0.00 0.00 734775.69 22319.09 909608.10
00:09:35.932 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x8000 length 0x8000
00:09:35.932 Nvme2n2 : 5.66 135.66 8.48 0.00 0.00 861992.10 34110.30 1051102.69
00:09:35.932 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0x8000
00:09:35.932 Nvme2n3 : 5.66 162.16 10.14 0.00 0.00 701912.80 19476.56 936559.45
00:09:35.932 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x8000 length 0x8000
00:09:35.932 Nvme2n3 : 5.72 137.84 8.61 0.00 0.00 823807.41 38321.45 963510.80
00:09:35.932 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x0 length 0x2000
00:09:35.932 Nvme3n1 : 5.75 182.50 11.41 0.00 0.00 610479.25 1388.36 1509275.66
00:09:35.932 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:35.932 Verification LBA range: start 0x2000 length 0x2000
00:09:35.932 Nvme3n1 : 5.74 151.41 9.46 0.00 0.00 736252.46 6790.48 976986.47
00:09:35.932 [2024-11-20T08:22:23.493Z] ===================================================================================================================
00:09:35.932 [2024-11-20T08:22:23.493Z] Total : 1790.38 111.90 0.00 0.00 791319.57 1388.36 1509275.66
00:09:37.837
00:09:37.837 real 0m8.990s
00:09:37.837 user 0m16.636s
00:09:37.837 sys 0m0.422s
00:09:37.837 08:22:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1133 -- #
xtrace_disable 00:09:37.837 08:22:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:37.838 ************************************ 00:09:37.838 END TEST bdev_verify_big_io 00:09:37.838 ************************************ 00:09:37.838 08:22:25 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:37.838 08:22:25 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:09:37.838 08:22:25 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:37.838 08:22:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.838 ************************************ 00:09:37.838 START TEST bdev_write_zeroes 00:09:37.838 ************************************ 00:09:37.838 08:22:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:37.838 [2024-11-20 08:22:25.336393] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:37.838 [2024-11-20 08:22:25.336510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61396 ] 00:09:38.096 [2024-11-20 08:22:25.519019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.096 [2024-11-20 08:22:25.631994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.032 Running I/O for 1 seconds... 
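Each of these suites runs through the same run_test wrapper, which produces the starred START/END banners, the argument check ('[' 13 -le 1 ']' in the trace), and the real/user/sys timings seen above. A simplified sketch of the shape of that wrapper (the actual helper in autotest_common.sh also manages xtrace state):

  # Simplified run_test-style wrapper: banner, timed command, banner.
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }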
00:09:39.963 76032.00 IOPS, 297.00 MiB/s
00:09:39.963 Latency(us)
00:09:39.963 [2024-11-20T08:22:27.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:39.963 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme0n1 : 1.02 12645.60 49.40 0.00 0.00 10100.36 8580.22 19687.12
00:09:39.963 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme1n1 : 1.02 12633.49 49.35 0.00 0.00 10098.89 8896.05 19371.28
00:09:39.963 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme2n1 : 1.02 12622.17 49.31 0.00 0.00 10086.11 8632.85 18423.78
00:09:39.963 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme2n2 : 1.02 12646.66 49.40 0.00 0.00 10021.76 6106.17 16528.76
00:09:39.963 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme2n3 : 1.02 12607.33 49.25 0.00 0.00 10036.09 8211.74 16634.04
00:09:39.963 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:39.963 Nvme3n1 : 1.02 12595.98 49.20 0.00 0.00 10033.95 7001.03 18002.66
00:09:39.963 [2024-11-20T08:22:27.524Z] ===================================================================================================================
00:09:39.963 [2024-11-20T08:22:27.524Z] Total : 75751.23 295.90 0.00 0.00 10062.82 6106.17 19687.12
00:09:40.899
00:09:40.899 real 0m3.163s
00:09:40.899 user 0m2.789s
00:09:40.899 sys 0m0.260s
00:09:40.899 08:22:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1133 -- # xtrace_disable
00:09:40.899 08:22:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:40.899 ************************************
00:09:40.899 END TEST bdev_write_zeroes
00:09:40.899 ************************************
00:09:41.156 08:22:28 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:41.156 08:22:28 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']'
00:09:41.156 08:22:28 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:09:41.156 08:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:41.156 ************************************
00:09:41.157 START TEST bdev_json_nonenclosed
00:09:41.157 ************************************
00:09:41.157 08:22:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:41.157 [2024-11-20 08:22:28.588564] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
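The nonenclosed fixture is meant to fail: it hands bdevperf a JSON configuration whose top level is not an object. The fixture itself is not reproduced in the log; a minimal file that would trip the same "not enclosed in {}" error reported just below could look like this (hypothetical contents):

  # Hypothetical nonenclosed.json: syntactically valid JSON, but the top
  # level is an array rather than the object the config loader requires.
  printf '%s\n' \
      '[' \
      '  { "subsystem": "bdev", "config": [] }' \
      ']' > /tmp/nonenclosed.json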
00:09:41.157 [2024-11-20 08:22:28.588698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61449 ] 00:09:41.415 [2024-11-20 08:22:28.775747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.415 [2024-11-20 08:22:28.885218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.415 [2024-11-20 08:22:28.885335] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:41.415 [2024-11-20 08:22:28.885357] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:41.415 [2024-11-20 08:22:28.885369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.674 00:09:41.674 real 0m0.652s 00:09:41.674 user 0m0.404s 00:09:41.674 sys 0m0.143s 00:09:41.674 08:22:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:41.674 08:22:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:41.674 ************************************ 00:09:41.674 END TEST bdev_json_nonenclosed 00:09:41.674 ************************************ 00:09:41.674 08:22:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.674 08:22:29 blockdev_nvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:09:41.674 08:22:29 blockdev_nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:41.674 08:22:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.674 ************************************ 00:09:41.674 START TEST bdev_json_nonarray 00:09:41.674 ************************************ 00:09:41.674 08:22:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.933 [2024-11-20 08:22:29.312451] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:41.933 [2024-11-20 08:22:29.312568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:09:42.191 [2024-11-20 08:22:29.494204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.191 [2024-11-20 08:22:29.608332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.191 [2024-11-20 08:22:29.608456] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
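The companion nonarray case traced above fails one step later: its configuration is enclosed in braces, but the "subsystems" member is not an array, which trips the "'subsystems' should be an array" check instead. A sketch of such a fixture (again hypothetical contents):

  # Hypothetical nonarray.json: enclosed in {}, but "subsystems" is an
  # object rather than the array json_config_prepare_ctx expects.
  printf '%s\n' \
      '{' \
      '  "subsystems": { "subsystem": "bdev", "config": [] }' \
      '}' > /tmp/nonarray.json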
00:09:42.191 [2024-11-20 08:22:29.608478] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:42.191 [2024-11-20 08:22:29.608491] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.449 00:09:42.449 real 0m0.647s 00:09:42.449 user 0m0.404s 00:09:42.449 sys 0m0.137s 00:09:42.449 08:22:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:42.449 08:22:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:42.449 ************************************ 00:09:42.449 END TEST bdev_json_nonarray 00:09:42.449 ************************************ 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:42.449 08:22:29 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:42.449 00:09:42.449 real 0m45.112s 00:09:42.449 user 1m6.591s 00:09:42.449 sys 0m8.576s 00:09:42.449 08:22:29 blockdev_nvme -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:42.449 08:22:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.449 ************************************ 00:09:42.449 END TEST blockdev_nvme 00:09:42.449 ************************************ 00:09:42.449 08:22:29 -- spdk/autotest.sh@209 -- # uname -s 00:09:42.449 08:22:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:42.449 08:22:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:42.449 08:22:30 -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:42.449 08:22:30 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:42.449 08:22:30 -- common/autotest_common.sh@10 -- # set +x 00:09:42.707 ************************************ 00:09:42.707 START TEST blockdev_nvme_gpt 00:09:42.707 ************************************ 00:09:42.707 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:42.707 * Looking for test storage... 
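The gpt suite opening here builds its fixture on the first NVMe namespace. Condensed, the partitioning that setup_gpt_conf performs further below with parted and sgdisk (all GUIDs exactly as they appear later in this log) comes down to:

  # Condensed sketch of the GPT fixture created by setup_gpt_conf below.
  DEV=/dev/nvme0n1
  SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b       # SPDK_GPT_PART_TYPE_GUID
  SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c   # SPDK_GPT_PART_TYPE_GUID_OLD
  PART1_GUID=6f89f330-603b-4116-ac73-2ca8eae53030
  PART2_GUID=abf1734f-66e5-4c0f-aa29-4021d4d307df

  # fresh GPT label with two halves
  parted -s "$DEV" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%

  # retype both partitions so SPDK's gpt bdev module will claim them
  sgdisk -t "1:$SPDK_GPT_GUID"     -u "1:$PART1_GUID" "$DEV"
  sgdisk -t "2:$SPDK_GPT_OLD_GUID" -u "2:$PART2_GUID" "$DEV"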
00:09:42.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:42.707 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:42.707 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1638 -- # lcov --version 00:09:42.707 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.967 08:22:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:42.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.967 --rc genhtml_branch_coverage=1 00:09:42.967 --rc genhtml_function_coverage=1 00:09:42.967 --rc genhtml_legend=1 00:09:42.967 --rc geninfo_all_blocks=1 00:09:42.967 --rc geninfo_unexecuted_blocks=1 00:09:42.967 00:09:42.967 ' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:42.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.967 --rc 
genhtml_branch_coverage=1 00:09:42.967 --rc genhtml_function_coverage=1 00:09:42.967 --rc genhtml_legend=1 00:09:42.967 --rc geninfo_all_blocks=1 00:09:42.967 --rc geninfo_unexecuted_blocks=1 00:09:42.967 00:09:42.967 ' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:42.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.967 --rc genhtml_branch_coverage=1 00:09:42.967 --rc genhtml_function_coverage=1 00:09:42.967 --rc genhtml_legend=1 00:09:42.967 --rc geninfo_all_blocks=1 00:09:42.967 --rc geninfo_unexecuted_blocks=1 00:09:42.967 00:09:42.967 ' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:42.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.967 --rc genhtml_branch_coverage=1 00:09:42.967 --rc genhtml_function_coverage=1 00:09:42.967 --rc genhtml_legend=1 00:09:42.967 --rc geninfo_all_blocks=1 00:09:42.967 --rc geninfo_unexecuted_blocks=1 00:09:42.967 00:09:42.967 ' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61570 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61570 
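Before the gpt setup can issue RPCs, the harness starts spdk_tgt and blocks in waitforlisten until the RPC socket answers; that is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line just below. A rough sketch of that wait (SPDK's real helper carries more retry and diagnostic logic than this):

  # Rough wait-for-listen loop: the target must stay alive and its RPC
  # socket must answer before the test can proceed.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  pid=$1                                   # e.g. 61570 in this run
  for ((i = 0; i < 100; i++)); do          # assumed retry budget
      kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt died' >&2; exit 1; }
      if [[ -S $SOCK ]] && "$RPC" -s "$SOCK" -t 1 rpc_get_methods &>/dev/null; then
          break                            # target is up and serving RPCs
      fi
      sleep 0.1
  done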
00:09:42.967 08:22:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # '[' -z 61570 ']' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:42.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:42.967 08:22:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 [2024-11-20 08:22:30.432957] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:42.967 [2024-11-20 08:22:30.433109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61570 ] 00:09:43.227 [2024-11-20 08:22:30.620243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.227 [2024-11-20 08:22:30.727768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.170 08:22:31 blockdev_nvme_gpt -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:44.170 08:22:31 blockdev_nvme_gpt -- common/autotest_common.sh@871 -- # return 0 00:09:44.170 08:22:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:44.170 08:22:31 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:09:44.170 08:22:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.006 Waiting for block devices as requested 00:09:45.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.264 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.536 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1602 -- # zoned_devs=() 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1602 -- # local -gA zoned_devs 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1603 -- # local nvme bdf 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme0n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 
blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme1n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme2n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n2 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme2n2 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n3 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme2n3 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3c3n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme3c3n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1595 -- # local device=nvme3n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:50.536 08:22:37 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:50.536 BYT; 00:09:50.536 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:50.536 BYT; 00:09:50.536 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:50.536 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:50.537 08:22:37 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:50.537 08:22:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:51.474 The operation has completed successfully. 00:09:51.474 08:22:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:52.849 The operation has completed successfully. 00:09:52.849 08:22:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.985 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.985 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.985 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.985 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.244 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:54.244 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.245 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.245 [] 00:09:54.245 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.245 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:54.245 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:54.245 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:54.245 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:54.245 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:54.245 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.245 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.503 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.503 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:54.503 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.503 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.503 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.503 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:54.503 08:22:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:54.503 08:22:41 
blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.503 08:22:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.503 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.503 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:54.503 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.503 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.503 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.762 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.762 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:54.762 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.762 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:54.762 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:54.762 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4ba190c9-db9b-4f02-a431-2ed5554ae570"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4ba190c9-db9b-4f02-a431-2ed5554ae570",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b9056281-2e4d-437c-9df7-8e1e97925ccd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b9056281-2e4d-437c-9df7-8e1e97925ccd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6e5f553f-b625-4783-a554-8a9f43cc22ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6e5f553f-b625-4783-a554-8a9f43cc22ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2632beb7-f02d-4e8c-aa51-f1a1b2e70f05"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2632beb7-f02d-4e8c-aa51-f1a1b2e70f05",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1248a879-0196-439b-ab17-39e20b4c933f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1248a879-0196-439b-ab17-39e20b4c933f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:54.763 08:22:42 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61570 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' -z 61570 ']' 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@961 -- # kill -0 61570 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # uname 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 61570 00:09:54.763 killing process with pid 61570 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@975 -- # echo 'killing process with pid 61570' 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # kill 61570 00:09:54.763 08:22:42 blockdev_nvme_gpt -- common/autotest_common.sh@981 -- # wait 61570 00:09:57.296 08:22:44 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:57.296 08:22:44 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:57.296 08:22:44 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 7 -le 1 ']' 00:09:57.296 08:22:44 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:57.296 08:22:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:57.296 ************************************ 00:09:57.296 START TEST bdev_hello_world 00:09:57.296 ************************************ 00:09:57.296 08:22:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:57.555 [2024-11-20 
08:22:44.893188] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:57.555 [2024-11-20 08:22:44.893318] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:09:57.555 [2024-11-20 08:22:45.076291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.814 [2024-11-20 08:22:45.211620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.383 [2024-11-20 08:22:45.915406] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:58.383 [2024-11-20 08:22:45.915459] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:58.383 [2024-11-20 08:22:45.915485] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:58.383 [2024-11-20 08:22:45.918760] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:58.383 [2024-11-20 08:22:45.919470] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:58.383 [2024-11-20 08:22:45.919509] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:58.383 [2024-11-20 08:22:45.919739] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:58.383 00:09:58.383 [2024-11-20 08:22:45.919761] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:59.763 ************************************ 00:09:59.763 END TEST bdev_hello_world 00:09:59.763 ************************************ 00:09:59.763 00:09:59.763 real 0m2.309s 00:09:59.763 user 0m1.886s 00:09:59.763 sys 0m0.311s 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:59.763 08:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:59.763 08:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:59.763 08:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:59.763 08:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.763 ************************************ 00:09:59.763 START TEST bdev_bounds 00:09:59.763 ************************************ 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1132 -- # bdev_bounds '' 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62260 00:09:59.763 Process bdevio pid: 62260 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62260' 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62260 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # '[' -z 62260 ']' 00:09:59.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
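The bdev_bounds test starting here is a thin wrapper around SPDK's bdevio app: bdevio is launched in wait mode against the generated bdev config, and the CUnit suites that follow are then kicked off over the RPC socket. Stripped of the harness bookkeeping (traps, waitforlisten, cleanup), the pattern reduces to roughly this sketch, with paths taken from this run:

  spdk=/home/vagrant/spdk_repo/spdk
  # -w makes bdevio start up and wait to be triggered; -s 0 is passed through
  # exactly as in the trace below.
  "$spdk"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk"/test/bdev/bdev.json '' &
  bdevio_pid=$!
  sleep 1   # the real harness polls /var/tmp/spdk.sock instead of sleeping
  "$spdk"/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"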
00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:59.763 08:22:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:59.763 [2024-11-20 08:22:47.280479] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:59.763 [2024-11-20 08:22:47.280607] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ] 00:10:00.022 [2024-11-20 08:22:47.468292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.281 [2024-11-20 08:22:47.607369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.281 [2024-11-20 08:22:47.607515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.281 [2024-11-20 08:22:47.607558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.848 08:22:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:10:00.848 08:22:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@871 -- # return 0 00:10:00.848 08:22:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:01.107 I/O targets: 00:10:01.107 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:01.107 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:01.107 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:01.107 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.107 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.107 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.107 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:01.107 00:10:01.107 00:10:01.107 CUnit - A unit testing framework for C - Version 2.1-3 00:10:01.107 http://cunit.sourceforge.net/ 00:10:01.107 00:10:01.107 00:10:01.107 Suite: bdevio tests on: Nvme3n1 00:10:01.107 Test: blockdev write read block ...passed 00:10:01.107 Test: blockdev write zeroes read block ...passed 00:10:01.107 Test: blockdev write zeroes read no split ...passed 00:10:01.107 Test: blockdev write zeroes read split ...passed 00:10:01.107 Test: blockdev write zeroes read split partial ...passed 00:10:01.107 Test: blockdev reset ...[2024-11-20 08:22:48.531300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:01.107 [2024-11-20 08:22:48.536234] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:01.107 passed 00:10:01.107 Test: blockdev write read 8 blocks ...passed 00:10:01.107 Test: blockdev write read size > 128k ...passed 00:10:01.107 Test: blockdev write read invalid size ...passed 00:10:01.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.107 Test: blockdev write read max offset ...passed 00:10:01.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.107 Test: blockdev writev readv 8 blocks ...passed 00:10:01.107 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.107 Test: blockdev writev readv block ...passed 00:10:01.107 Test: blockdev writev readv size > 128k ...passed 00:10:01.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.107 Test: blockdev comparev and writev ...[2024-11-20 08:22:48.549099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x244404000 len:0x1000 00:10:01.107 [2024-11-20 08:22:48.549316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.107 passed 00:10:01.107 Test: blockdev nvme passthru rw ...passed 00:10:01.107 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:48.550580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.107 [2024-11-20 08:22:48.550713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.107 passed 00:10:01.107 Test: blockdev nvme admin passthru ...passed 00:10:01.107 Test: blockdev copy ...passed 00:10:01.107 Suite: bdevio tests on: Nvme2n3 00:10:01.107 Test: blockdev write read block ...passed 00:10:01.107 Test: blockdev write zeroes read block ...passed 00:10:01.107 Test: blockdev write zeroes read no split ...passed 00:10:01.107 Test: blockdev write zeroes read split ...passed 00:10:01.107 Test: blockdev write zeroes read split partial ...passed 00:10:01.107 Test: blockdev reset ...[2024-11-20 08:22:48.627823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.107 passed 00:10:01.107 Test: blockdev write read 8 blocks ...[2024-11-20 08:22:48.632439] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:01.107 passed 00:10:01.107 Test: blockdev write read size > 128k ...passed 00:10:01.107 Test: blockdev write read invalid size ...passed 00:10:01.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.107 Test: blockdev write read max offset ...passed 00:10:01.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.107 Test: blockdev writev readv 8 blocks ...passed 00:10:01.107 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.107 Test: blockdev writev readv block ...passed 00:10:01.107 Test: blockdev writev readv size > 128k ...passed 00:10:01.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.107 Test: blockdev comparev and writev ...[2024-11-20 08:22:48.642562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x244402000 len:0x1000 00:10:01.107 [2024-11-20 08:22:48.642612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.107 passed 00:10:01.107 Test: blockdev nvme passthru rw ...passed 00:10:01.107 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:48.643600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.107 [2024-11-20 08:22:48.643643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.107 passed 00:10:01.107 Test: blockdev nvme admin passthru ...passed 00:10:01.107 Test: blockdev copy ...passed 00:10:01.107 Suite: bdevio tests on: Nvme2n2 00:10:01.107 Test: blockdev write read block ...passed 00:10:01.107 Test: blockdev write zeroes read block ...passed 00:10:01.367 Test: blockdev write zeroes read no split ...passed 00:10:01.367 Test: blockdev write zeroes read split ...passed 00:10:01.367 Test: blockdev write zeroes read split partial ...passed 00:10:01.367 Test: blockdev reset ...[2024-11-20 08:22:48.722059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.367 [2024-11-20 08:22:48.726801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:01.367 passed 00:10:01.367 Test: blockdev write read 8 blocks ...passed 00:10:01.367 Test: blockdev write read size > 128k ...passed 00:10:01.367 Test: blockdev write read invalid size ...passed 00:10:01.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.367 Test: blockdev write read max offset ...passed 00:10:01.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.367 Test: blockdev writev readv 8 blocks ...passed 00:10:01.367 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.367 Test: blockdev writev readv block ...passed 00:10:01.367 Test: blockdev writev readv size > 128k ...passed 00:10:01.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.367 Test: blockdev comparev and writev ...[2024-11-20 08:22:48.738280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x25fc38000 len:0x1000 00:10:01.367 [2024-11-20 08:22:48.738481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.367 passed 00:10:01.367 Test: blockdev nvme passthru rw ...passed 00:10:01.367 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:48.740032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.367 [2024-11-20 08:22:48.740358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.367 passed 00:10:01.367 Test: blockdev nvme admin passthru ...passed 00:10:01.367 Test: blockdev copy ...passed 00:10:01.367 Suite: bdevio tests on: Nvme2n1 00:10:01.367 Test: blockdev write read block ...passed 00:10:01.367 Test: blockdev write zeroes read block ...passed 00:10:01.367 Test: blockdev write zeroes read no split ...passed 00:10:01.367 Test: blockdev write zeroes read split ...passed 00:10:01.367 Test: blockdev write zeroes read split partial ...passed 00:10:01.367 Test: blockdev reset ...[2024-11-20 08:22:48.820009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.367 [2024-11-20 08:22:48.824520] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:01.367 passed 00:10:01.367 Test: blockdev write read 8 blocks ...passed 00:10:01.367 Test: blockdev write read size > 128k ...passed 00:10:01.367 Test: blockdev write read invalid size ...passed 00:10:01.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.367 Test: blockdev write read max offset ...passed 00:10:01.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.367 Test: blockdev writev readv 8 blocks ...passed 00:10:01.367 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.367 Test: blockdev writev readv block ...passed 00:10:01.367 Test: blockdev writev readv size > 128k ...passed 00:10:01.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.367 Test: blockdev comparev and writev ...[2024-11-20 08:22:48.835847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x25fc34000 len:0x1000 00:10:01.367 [2024-11-20 08:22:48.836165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.367 passed 00:10:01.367 Test: blockdev nvme passthru rw ...passed 00:10:01.367 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:48.837877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.367 [2024-11-20 08:22:48.838136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.367 passed 00:10:01.367 Test: blockdev nvme admin passthru ...passed 00:10:01.367 Test: blockdev copy ...passed 00:10:01.367 Suite: bdevio tests on: Nvme1n1p2 00:10:01.367 Test: blockdev write read block ...passed 00:10:01.367 Test: blockdev write zeroes read block ...passed 00:10:01.367 Test: blockdev write zeroes read no split ...passed 00:10:01.367 Test: blockdev write zeroes read split ...passed 00:10:01.367 Test: blockdev write zeroes read split partial ...passed 00:10:01.367 Test: blockdev reset ...[2024-11-20 08:22:48.914085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:01.367 passed 00:10:01.367 Test: blockdev write read 8 blocks ...[2024-11-20 08:22:48.918262] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:01.367 passed 00:10:01.367 Test: blockdev write read size > 128k ...passed 00:10:01.367 Test: blockdev write read invalid size ...passed 00:10:01.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.367 Test: blockdev write read max offset ...passed 00:10:01.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.367 Test: blockdev writev readv 8 blocks ...passed 00:10:01.367 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.367 Test: blockdev writev readv block ...passed 00:10:01.627 Test: blockdev writev readv size > 128k ...passed 00:10:01.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.627 Test: blockdev comparev and writev ...[2024-11-20 08:22:48.928542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x25fc30000 len:0x1000 00:10:01.627 [2024-11-20 08:22:48.928775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.627 passed 00:10:01.627 Test: blockdev nvme passthru rw ...passed 00:10:01.627 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.627 Test: blockdev nvme admin passthru ...passed 00:10:01.627 Test: blockdev copy ...passed 00:10:01.627 Suite: bdevio tests on: Nvme1n1p1 00:10:01.627 Test: blockdev write read block ...passed 00:10:01.627 Test: blockdev write zeroes read block ...passed 00:10:01.627 Test: blockdev write zeroes read no split ...passed 00:10:01.627 Test: blockdev write zeroes read split ...passed 00:10:01.627 Test: blockdev write zeroes read split partial ...passed 00:10:01.627 Test: blockdev reset ...[2024-11-20 08:22:48.999521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:01.627 [2024-11-20 08:22:49.003573] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:10:01.627 passed 00:10:01.627 Test: blockdev write read 8 blocks ...passed 00:10:01.627 Test: blockdev write read size > 128k ...passed 00:10:01.627 Test: blockdev write read invalid size ...passed 00:10:01.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.627 Test: blockdev write read max offset ...passed 00:10:01.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.627 Test: blockdev writev readv 8 blocks ...passed 00:10:01.627 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.627 Test: blockdev writev readv block ...passed 00:10:01.627 Test: blockdev writev readv size > 128k ...passed 00:10:01.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.627 Test: blockdev comparev and writev ...[2024-11-20 08:22:49.014786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x242a0e000 len:0x1000 00:10:01.627 [2024-11-20 08:22:49.014968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.627 passed 00:10:01.627 Test: blockdev nvme passthru rw ...passed 00:10:01.627 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.627 Test: blockdev nvme admin passthru ...passed 00:10:01.627 Test: blockdev copy ...passed 00:10:01.627 Suite: bdevio tests on: Nvme0n1 00:10:01.627 Test: blockdev write read block ...passed 00:10:01.627 Test: blockdev write zeroes read block ...passed 00:10:01.627 Test: blockdev write zeroes read no split ...passed 00:10:01.627 Test: blockdev write zeroes read split ...passed 00:10:01.627 Test: blockdev write zeroes read split partial ...passed 00:10:01.627 Test: blockdev reset ...[2024-11-20 08:22:49.086641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:01.627 passed 00:10:01.627 Test: blockdev write read 8 blocks ...[2024-11-20 08:22:49.090776] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:01.627 passed 00:10:01.627 Test: blockdev write read size > 128k ...passed 00:10:01.627 Test: blockdev write read invalid size ...passed 00:10:01.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.627 Test: blockdev write read max offset ...passed 00:10:01.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.627 Test: blockdev writev readv 8 blocks ...passed 00:10:01.627 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.627 Test: blockdev writev readv block ...passed 00:10:01.627 Test: blockdev writev readv size > 128k ...passed 00:10:01.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.627 Test: blockdev comparev and writev ...passed 00:10:01.627 Test: blockdev nvme passthru rw ...[2024-11-20 08:22:49.099648] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:01.627 separate metadata which is not supported yet. 
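The skip just logged is keyed off the bdev's metadata layout: in the bdev_get_bdevs dump earlier, Nvme0n1 is the only device reporting "md_size": 64 with "md_interleave": false, i.e. separate metadata, which the comparev_and_writev case does not handle yet. On a live target the affected bdevs can be listed with a query along these lines (a sketch against the default RPC socket, not something this test runs):

  # Names of bdevs whose namespaces expose separate (non-interleaved) metadata.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'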
00:10:01.627 passed 00:10:01.627 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:49.100194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:01.627 [2024-11-20 08:22:49.100241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:01.627 passed 00:10:01.627 Test: blockdev nvme admin passthru ...passed 00:10:01.627 Test: blockdev copy ...passed 00:10:01.627 00:10:01.627 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.627 suites 7 7 n/a 0 0 00:10:01.627 tests 161 161 161 0 0 00:10:01.627 asserts 1025 1025 1025 0 n/a 00:10:01.627 00:10:01.627 Elapsed time = 1.744 seconds 00:10:01.627 0 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62260 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' -z 62260 ']' 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@961 -- # kill -0 62260 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # uname 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 62260 00:10:01.627 killing process with pid 62260 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@975 -- # echo 'killing process with pid 62260' 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # kill 62260 00:10:01.627 08:22:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@981 -- # wait 62260 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:03.006 00:10:03.006 real 0m3.119s 00:10:03.006 user 0m7.861s 00:10:03.006 sys 0m0.499s 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:03.006 ************************************ 00:10:03.006 END TEST bdev_bounds 00:10:03.006 ************************************ 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:03.006 08:22:50 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:03.006 08:22:50 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:10:03.006 08:22:50 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:03.006 08:22:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:03.006 ************************************ 00:10:03.006 START TEST bdev_nbd 00:10:03.006 ************************************ 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1132 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62331 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:10:03.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # '[' -z 62331 ']' 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@843 -- # local max_retries=100 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@847 -- # xtrace_disable 00:10:03.006 08:22:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:03.006 [2024-11-20 08:22:50.476291] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
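Once this bdev_svc instance finishes starting, the nbd test below repeats one cycle per bdev: export the bdev as a kernel /dev/nbdX node over the dedicated RPC socket, wait for the node to show up in /proc/partitions, push one block of direct I/O through it with dd, and tear the mapping down. Condensed into its bare commands (the bdev name is just the first one used in this run, the output path is shortened, and the loop simplifies the harness's bounded 20-retry waitfornbd):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1                      # prints the /dev/nbdX it claimed
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the device node
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one 4 KiB block, O_DIRECT
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'   # list active bdev<->nbd mappings
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0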
00:10:03.006 [2024-11-20 08:22:50.476579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.265 [2024-11-20 08:22:50.660371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.265 [2024-11-20 08:22:50.795954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # return 0 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.201 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.461 1+0 records in 00:10:04.461 1+0 records out 00:10:04.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920727 s, 4.4 MB/s 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.461 08:22:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.720 1+0 records in 00:10:04.720 1+0 records out 00:10:04.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740861 s, 5.5 MB/s 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:04.720 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:04.721 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.721 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.721 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd2 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd2 /proc/partitions 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.980 1+0 records in 00:10:04.980 1+0 records out 00:10:04.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659134 s, 6.2 MB/s 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.980 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd3 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd3 /proc/partitions 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.240 1+0 records in 00:10:05.240 1+0 records out 00:10:05.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686372 s, 6.0 MB/s 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.240 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd4 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd4 /proc/partitions 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.500 1+0 records in 00:10:05.500 1+0 records out 00:10:05.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887615 s, 4.6 MB/s 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.500 08:22:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd5 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd5 /proc/partitions 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.759 1+0 records in 00:10:05.759 1+0 records out 00:10:05.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624127 s, 6.6 MB/s 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.759 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.760 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd6 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd6 /proc/partitions 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.019 1+0 records in 00:10:06.019 1+0 records out 00:10:06.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079429 s, 5.2 MB/s 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:06.019 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.278 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:06.278 { 00:10:06.278 "nbd_device": "/dev/nbd0", 00:10:06.278 "bdev_name": "Nvme0n1" 00:10:06.278 }, 00:10:06.278 { 00:10:06.278 "nbd_device": "/dev/nbd1", 00:10:06.279 "bdev_name": "Nvme1n1p1" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd2", 00:10:06.279 "bdev_name": "Nvme1n1p2" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd3", 00:10:06.279 "bdev_name": "Nvme2n1" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd4", 00:10:06.279 "bdev_name": "Nvme2n2" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd5", 00:10:06.279 "bdev_name": "Nvme2n3" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd6", 00:10:06.279 "bdev_name": "Nvme3n1" 00:10:06.279 } 00:10:06.279 ]' 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd0", 00:10:06.279 "bdev_name": "Nvme0n1" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd1", 00:10:06.279 "bdev_name": "Nvme1n1p1" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd2", 00:10:06.279 "bdev_name": "Nvme1n1p2" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd3", 00:10:06.279 "bdev_name": "Nvme2n1" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd4", 00:10:06.279 "bdev_name": "Nvme2n2" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd5", 00:10:06.279 "bdev_name": "Nvme2n3" 00:10:06.279 }, 00:10:06.279 { 00:10:06.279 "nbd_device": "/dev/nbd6", 00:10:06.279 "bdev_name": "Nvme3n1" 00:10:06.279 } 00:10:06.279 ]' 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:06.279 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.539 08:22:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.539 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:06.798 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:06.798 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:06.798 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:06.798 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.798 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.799 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:06.799 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.799 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.799 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.799 08:22:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.058 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.318 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.577 08:22:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:07.835 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.836 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:08.094 
08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.094 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:08.372 /dev/nbd0 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.372 1+0 records in 00:10:08.372 1+0 records out 00:10:08.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667206 s, 6.1 MB/s 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.372 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:08.630 /dev/nbd1 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:08.630 08:22:55 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:08.630 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.630 1+0 records in 00:10:08.630 1+0 records out 00:10:08.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704045 s, 5.8 MB/s 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.631 08:22:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:08.889 /dev/nbd10 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd10 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd10 /proc/partitions 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.889 1+0 records in 00:10:08.889 1+0 records out 00:10:08.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765625 s, 5.3 MB/s 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.889 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:09.147 /dev/nbd11 00:10:09.147 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:09.147 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:09.147 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd11 00:10:09.147 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd11 /proc/partitions 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.148 1+0 records in 00:10:09.148 1+0 records out 00:10:09.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714046 s, 5.7 MB/s 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.148 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:09.148 /dev/nbd12 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd12 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:09.406 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd12 /proc/partitions 
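
The trace above repeats one helper for every exported bdev: waitfornbd from test/common/autotest_common.sh polls /proc/partitions until the kernel lists the new nbd device, then proves the device actually answers reads with a single 4 KiB O_DIRECT read and a size check on the copied block. A condensed sketch of that logic, paraphrased from the xtrace (the function name, scratch path, and poll delay here are illustrative assumptions, not the verbatim helper):

# Sketch only: wait for /dev/$1 to appear and become readable.
# Paraphrased from the waitfornbd xtrace above; details are approximate.
waitfornbd_sketch() {
    local nbd_name=$1 i size
    local tmp=/tmp/nbdtest      # assumed scratch file (the trace uses test/bdev/nbdtest in the repo)
    for ((i = 1; i <= 20; i++)); do
        # ready once the kernel lists the device in /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1               # assumed poll interval
    done
    # prove the device answers reads: one 4 KiB direct-I/O block, as in the trace
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]            # the trace asserts '[' 4096 '!=' 0 ']'
}
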
00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.407 1+0 records in 00:10:09.407 1+0 records out 00:10:09.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088634 s, 4.6 MB/s 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.407 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:09.407 /dev/nbd13 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd13 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd13 /proc/partitions 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:09.666 08:22:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.666 1+0 records in 00:10:09.666 1+0 records out 00:10:09.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879127 s, 4.7 MB/s 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.666 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:09.924 /dev/nbd14 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd14 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd14 /proc/partitions 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.924 1+0 records in 00:10:09.924 1+0 records out 00:10:09.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841831 s, 4.9 MB/s 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.924 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd0", 00:10:10.183 "bdev_name": "Nvme0n1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd1", 00:10:10.183 "bdev_name": "Nvme1n1p1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd10", 00:10:10.183 "bdev_name": "Nvme1n1p2" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd11", 00:10:10.183 "bdev_name": "Nvme2n1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd12", 00:10:10.183 "bdev_name": "Nvme2n2" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd13", 00:10:10.183 "bdev_name": "Nvme2n3" 
00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd14", 00:10:10.183 "bdev_name": "Nvme3n1" 00:10:10.183 } 00:10:10.183 ]' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd0", 00:10:10.183 "bdev_name": "Nvme0n1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd1", 00:10:10.183 "bdev_name": "Nvme1n1p1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd10", 00:10:10.183 "bdev_name": "Nvme1n1p2" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd11", 00:10:10.183 "bdev_name": "Nvme2n1" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd12", 00:10:10.183 "bdev_name": "Nvme2n2" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd13", 00:10:10.183 "bdev_name": "Nvme2n3" 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "nbd_device": "/dev/nbd14", 00:10:10.183 "bdev_name": "Nvme3n1" 00:10:10.183 } 00:10:10.183 ]' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:10.183 /dev/nbd1 00:10:10.183 /dev/nbd10 00:10:10.183 /dev/nbd11 00:10:10.183 /dev/nbd12 00:10:10.183 /dev/nbd13 00:10:10.183 /dev/nbd14' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:10.183 /dev/nbd1 00:10:10.183 /dev/nbd10 00:10:10.183 /dev/nbd11 00:10:10.183 /dev/nbd12 00:10:10.183 /dev/nbd13 00:10:10.183 /dev/nbd14' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:10.183 256+0 records in 00:10:10.183 256+0 records out 00:10:10.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128929 s, 81.3 MB/s 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:10.183 256+0 records in 00:10:10.183 256+0 records out 00:10:10.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.14636 s, 7.2 MB/s 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.183 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:10.441 256+0 records in 00:10:10.441 256+0 records out 00:10:10.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151766 s, 6.9 MB/s 00:10:10.441 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.441 08:22:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:10.699 256+0 records in 00:10:10.699 256+0 records out 00:10:10.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14902 s, 7.0 MB/s 00:10:10.699 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.699 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:10.699 256+0 records in 00:10:10.699 256+0 records out 00:10:10.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150582 s, 7.0 MB/s 00:10:10.699 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.699 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:10.957 256+0 records in 00:10:10.957 256+0 records out 00:10:10.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146263 s, 7.2 MB/s 00:10:10.957 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.957 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:10.957 256+0 records in 00:10:10.957 256+0 records out 00:10:10.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145611 s, 7.2 MB/s 00:10:10.958 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.958 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:11.216 256+0 records in 00:10:11.216 256+0 records out 00:10:11.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148541 s, 7.1 MB/s 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.216 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:11.474 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.475 08:22:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.733 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.992 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.250 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.508 08:22:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.766 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:13.024 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:13.281 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:13.282 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:13.282 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.282 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:13.282 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:13.282 malloc_lvol_verify 00:10:13.282 08:23:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:13.540 0d67e3da-02f2-4424-9e2a-9c8131739815 00:10:13.540 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:13.798 cbbefc74-21f3-4dc8-bb21-35019c11a1d4 00:10:13.798 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:14.057 /dev/nbd0 00:10:14.057 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:14.057 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:14.057 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:14.057 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:14.057 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:14.057 mke2fs 1.47.0 (5-Feb-2023) 00:10:14.057 Discarding device blocks: 0/4096 done 00:10:14.057 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:14.057 00:10:14.057 Allocating group tables: 0/1 done 00:10:14.057 Writing inode tables: 0/1 done 00:10:14.058 Creating journal (1024 blocks): done 00:10:14.058 Writing superblocks and filesystem accounting information: 0/1 done 00:10:14.058 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:14.058 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62331 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' -z 62331 ']' 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@961 -- # kill -0 62331 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # uname 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 62331 00:10:14.316 killing process with pid 62331 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@975 -- # echo 'killing process with pid 62331' 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # kill 62331 00:10:14.316 08:23:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@981 -- # wait 62331 00:10:15.694 08:23:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:15.695 00:10:15.695 real 0m12.712s 00:10:15.695 user 0m16.140s 00:10:15.695 sys 0m5.450s 00:10:15.695 08:23:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:15.695 08:23:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:15.695 ************************************ 00:10:15.695 END TEST bdev_nbd 00:10:15.695 ************************************ 00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:15.695 skipping fio tests on NVMe due to multi-ns failures. 00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:15.695 08:23:03 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
08:23:03 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']'
08:23:03 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable
08:23:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:15.695 ************************************
00:10:15.695 START TEST bdev_verify
00:10:15.695 ************************************
00:10:15.695 08:23:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:15.695 [2024-11-20 08:23:03.246032] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
00:10:15.695 [2024-11-20 08:23:03.246668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62759 ]
00:10:15.969 [2024-11-20 08:23:03.429337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:16.232 [2024-11-20 08:23:03.565740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.232 [2024-11-20 08:23:03.565785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:16.801 Running I/O for 5 seconds...
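
The verify stage is a single bdevperf run, launched above through run_test. For reference, the command with its switches unpacked (the flag readings follow common bdevperf usage and the shape of the results below; the log itself only shows the bare options):

# -q 128   : keep 128 I/Os outstanding per job
# -o 4096  : 4 KiB I/O size
# -w verify: write a pattern, read it back, and compare
# -t 5     : run for 5 seconds
# -C       : let every core in the mask submit I/O to every bdev
# -m 0x3   : reactor core mask, cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -m 0x3 and -C, each bdev gets one verify job per core, which is why the result table below lists every device twice (Core Mask 0x1 and 0x2).
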
00:10:19.107 20992.00 IOPS, 82.00 MiB/s [2024-11-20T08:23:07.600Z] 20864.00 IOPS, 81.50 MiB/s [2024-11-20T08:23:08.565Z] 21077.33 IOPS, 82.33 MiB/s [2024-11-20T08:23:09.501Z] 20608.00 IOPS, 80.50 MiB/s [2024-11-20T08:23:09.501Z] 21043.20 IOPS, 82.20 MiB/s 00:10:21.940 Latency(us) 00:10:21.940 [2024-11-20T08:23:09.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.940 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0xbd0bd 00:10:21.940 Nvme0n1 : 5.06 1479.28 5.78 0.00 0.00 86121.97 9685.64 91803.04 00:10:21.940 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:21.940 Nvme0n1 : 5.05 1471.17 5.75 0.00 0.00 86687.19 20108.23 95593.07 00:10:21.940 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x4ff80 00:10:21.940 Nvme1n1p1 : 5.06 1478.82 5.78 0.00 0.00 85986.23 10001.48 85065.20 00:10:21.940 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:21.940 Nvme1n1p1 : 5.05 1470.73 5.75 0.00 0.00 86554.73 19266.00 88855.24 00:10:21.940 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x4ff7f 00:10:21.940 Nvme1n1p2 : 5.08 1486.90 5.81 0.00 0.00 85524.57 12264.97 73695.10 00:10:21.940 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:21.940 Nvme1n1p2 : 5.07 1476.05 5.77 0.00 0.00 85983.87 7369.51 77064.02 00:10:21.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x80000 00:10:21.940 Nvme2n1 : 5.08 1486.53 5.81 0.00 0.00 85411.96 12264.97 66115.03 00:10:21.940 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x80000 length 0x80000 00:10:21.940 Nvme2n1 : 5.09 1484.96 5.80 0.00 0.00 85466.96 10369.95 68220.61 00:10:21.940 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x80000 00:10:21.940 Nvme2n2 : 5.08 1486.20 5.81 0.00 0.00 85272.63 11791.22 61061.65 00:10:21.940 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x80000 length 0x80000 00:10:21.940 Nvme2n2 : 5.09 1484.64 5.80 0.00 0.00 85317.39 10054.12 64009.46 00:10:21.940 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x80000 00:10:21.940 Nvme2n3 : 5.08 1485.86 5.80 0.00 0.00 85126.38 11212.18 60640.54 00:10:21.940 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x80000 length 0x80000 00:10:21.940 Nvme2n3 : 5.09 1484.30 5.80 0.00 0.00 85175.66 9738.28 62325.00 00:10:21.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x0 length 0x20000 00:10:21.940 Nvme3n1 : 5.08 1485.44 5.80 0.00 0.00 84981.43 11001.63 63167.23 00:10:21.940 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:21.940 Verification LBA range: start 0x20000 length 0x20000 00:10:21.940 Nvme3n1 
: 5.09 1483.96 5.80 0.00 0.00 85039.74 9685.64 64009.46 00:10:21.940 [2024-11-20T08:23:09.501Z] =================================================================================================================== 00:10:21.940 [2024-11-20T08:23:09.501Z] Total : 20744.84 81.03 0.00 0.00 85614.72 7369.51 95593.07 00:10:23.842 00:10:23.842 real 0m7.844s 00:10:23.842 user 0m14.394s 00:10:23.842 sys 0m0.393s 00:10:23.842 08:23:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:23.842 08:23:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:23.842 ************************************ 00:10:23.842 END TEST bdev_verify 00:10:23.842 ************************************ 00:10:23.842 08:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:23.842 08:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']' 00:10:23.842 08:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:23.842 08:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:23.842 ************************************ 00:10:23.842 START TEST bdev_verify_big_io 00:10:23.842 ************************************ 00:10:23.842 08:23:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:23.842 [2024-11-20 08:23:11.172187] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:10:23.842 [2024-11-20 08:23:11.172308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62863 ] 00:10:23.842 [2024-11-20 08:23:11.357509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.100 [2024-11-20 08:23:11.494070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.100 [2024-11-20 08:23:11.494104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.034 Running I/O for 5 seconds... 
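The big-I/O pass just launched differs from the previous one only in I/O size, which makes the two result tables directly comparable.

    # Identical to the bdev_verify invocation except -o 65536 (64 KiB per I/O):
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # Reading the two Total rows together: ~20.7K IOPS at 4 KiB is ~81 MiB/s,
    # while ~2.1K IOPS at 64 KiB is ~131 MiB/s. Larger blocks trade IOPS for
    # bandwidth on the same QEMU-emulated controllers.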
00:10:29.257 2314.00 IOPS, 144.62 MiB/s [2024-11-20T08:23:17.754Z] 3139.50 IOPS, 196.22 MiB/s [2024-11-20T08:23:18.321Z] 2738.67 IOPS, 171.17 MiB/s [2024-11-20T08:23:18.321Z] 2811.75 IOPS, 175.73 MiB/s 00:10:30.760 Latency(us) 00:10:30.760 [2024-11-20T08:23:18.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.760 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0xbd0b 00:10:30.760 Nvme0n1 : 5.71 131.79 8.24 0.00 0.00 929107.76 32846.96 1233024.31 00:10:30.760 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:30.760 Nvme0n1 : 5.66 124.66 7.79 0.00 0.00 994491.15 25372.17 1637294.57 00:10:30.760 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x4ff8 00:10:30.760 Nvme1n1p1 : 5.71 139.64 8.73 0.00 0.00 858270.13 90960.81 835491.88 00:10:30.760 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:30.760 Nvme1n1p1 : 5.72 144.95 9.06 0.00 0.00 830117.21 75800.67 801802.69 00:10:30.760 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x4ff7 00:10:30.760 Nvme1n1p2 : 5.71 145.66 9.10 0.00 0.00 817577.16 47585.98 842229.72 00:10:30.760 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:30.760 Nvme1n1p2 : 5.78 150.80 9.43 0.00 0.00 785509.24 52428.80 754637.83 00:10:30.760 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x8000 00:10:30.760 Nvme2n1 : 5.77 151.07 9.44 0.00 0.00 770987.03 13265.12 805171.61 00:10:30.760 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x8000 length 0x8000 00:10:30.760 Nvme2n1 : 5.79 151.42 9.46 0.00 0.00 763913.65 53692.14 761375.67 00:10:30.760 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x8000 00:10:30.760 Nvme2n2 : 5.73 150.45 9.40 0.00 0.00 758942.59 13791.51 848967.56 00:10:30.760 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x8000 length 0x8000 00:10:30.760 Nvme2n2 : 5.79 145.66 9.10 0.00 0.00 778101.87 54323.82 1522751.33 00:10:30.760 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x8000 00:10:30.760 Nvme2n3 : 5.80 154.69 9.67 0.00 0.00 715886.23 42111.49 859074.31 00:10:30.760 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x8000 length 0x8000 00:10:30.760 Nvme2n3 : 5.83 157.51 9.84 0.00 0.00 706039.67 16634.04 1549702.68 00:10:30.760 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:30.760 Verification LBA range: start 0x0 length 0x2000 00:10:30.760 Nvme3n1 : 5.84 175.79 10.99 0.00 0.00 619936.69 6658.88 859074.31 00:10:30.760 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:30.761 Verification LBA range: start 0x2000 length 0x2000 00:10:30.761 Nvme3n1 : 5.84 167.01 10.44 0.00 0.00 650119.78 
5184.98 1563178.36 00:10:30.761 [2024-11-20T08:23:18.322Z] =================================================================================================================== 00:10:30.761 [2024-11-20T08:23:18.322Z] Total : 2091.12 130.69 0.00 0.00 775453.29 5184.98 1637294.57 00:10:33.295 00:10:33.295 real 0m9.255s 00:10:33.295 user 0m17.182s 00:10:33.295 sys 0m0.420s 00:10:33.295 08:23:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:33.295 08:23:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:33.295 ************************************ 00:10:33.295 END TEST bdev_verify_big_io 00:10:33.295 ************************************ 00:10:33.295 08:23:20 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.295 08:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:10:33.295 08:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:33.295 08:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.295 ************************************ 00:10:33.295 START TEST bdev_write_zeroes 00:10:33.295 ************************************ 00:10:33.295 08:23:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.295 [2024-11-20 08:23:20.501211] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:10:33.296 [2024-11-20 08:23:20.501332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:10:33.296 [2024-11-20 08:23:20.684195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.296 [2024-11-20 08:23:20.816593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.255 Running I/O for 1 seconds... 
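The write_zeroes pass now running drops -C and narrows the run to a single reactor (-c 0x1 in the EAL parameters above). The command below is the invocation from the run_test line.

    # One-second write_zeroes sweep across all seven bdevs:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
    # The controllers here advertise "Write Zeroes (08h): Supported LBA-Change"
    # (see the identify output later in this log), so the workload exercises
    # the Write Zeroes command path rather than data-bearing writes.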
00:10:35.187 66304.00 IOPS, 259.00 MiB/s 00:10:35.187 Latency(us) 00:10:35.187 [2024-11-20T08:23:22.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.187 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme0n1 : 1.02 9450.67 36.92 0.00 0.00 13497.41 11896.49 31583.61 00:10:35.187 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme1n1p1 : 1.02 9439.70 36.87 0.00 0.00 13490.98 12159.69 32636.40 00:10:35.187 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme1n1p2 : 1.02 9429.35 36.83 0.00 0.00 13459.92 11580.66 30951.94 00:10:35.187 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme2n1 : 1.03 9471.83 37.00 0.00 0.00 13341.76 8106.46 23898.27 00:10:35.187 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme2n2 : 1.03 9463.29 36.97 0.00 0.00 13306.05 8159.10 21687.42 00:10:35.187 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme2n3 : 1.03 9454.72 36.93 0.00 0.00 13289.81 8264.38 21792.69 00:10:35.187 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:35.187 Nvme3n1 : 1.03 9446.24 36.90 0.00 0.00 13265.77 8211.74 23371.87 00:10:35.187 [2024-11-20T08:23:22.748Z] =================================================================================================================== 00:10:35.187 [2024-11-20T08:23:22.748Z] Total : 66155.81 258.42 0.00 0.00 13378.52 8106.46 32636.40 00:10:36.561 00:10:36.561 real 0m3.446s 00:10:36.561 user 0m2.989s 00:10:36.561 sys 0m0.342s 00:10:36.561 08:23:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:36.561 08:23:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:36.561 ************************************ 00:10:36.561 END TEST bdev_write_zeroes 00:10:36.561 ************************************ 00:10:36.561 08:23:23 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.561 08:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:10:36.561 08:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:36.561 08:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:36.561 ************************************ 00:10:36.561 START TEST bdev_json_nonenclosed 00:10:36.561 ************************************ 00:10:36.561 08:23:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.561 [2024-11-20 08:23:24.029443] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:10:36.561 [2024-11-20 08:23:24.029596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:10:36.829 [2024-11-20 08:23:24.207246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.829 [2024-11-20 08:23:24.347022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.829 [2024-11-20 08:23:24.347134] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:36.829 [2024-11-20 08:23:24.347159] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:36.829 [2024-11-20 08:23:24.347173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.088 00:10:37.088 real 0m0.692s 00:10:37.088 user 0m0.421s 00:10:37.088 sys 0m0.165s 00:10:37.088 08:23:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:37.088 08:23:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:37.088 ************************************ 00:10:37.088 END TEST bdev_json_nonenclosed 00:10:37.088 ************************************ 00:10:37.347 08:23:24 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:37.347 08:23:24 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:10:37.347 08:23:24 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:37.347 08:23:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:37.347 ************************************ 00:10:37.347 START TEST bdev_json_nonarray 00:10:37.347 ************************************ 00:10:37.347 08:23:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:37.347 [2024-11-20 08:23:24.792095] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:10:37.347 [2024-11-20 08:23:24.792211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63068 ] 00:10:37.607 [2024-11-20 08:23:24.973520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.607 [2024-11-20 08:23:25.100341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.607 [2024-11-20 08:23:25.100462] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:37.607 [2024-11-20 08:23:25.100486] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:37.607 [2024-11-20 08:23:25.100499] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.866 00:10:37.866 real 0m0.668s 00:10:37.866 user 0m0.411s 00:10:37.866 sys 0m0.152s 00:10:37.866 08:23:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:37.866 08:23:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:37.866 ************************************ 00:10:37.866 END TEST bdev_json_nonarray 00:10:37.866 ************************************ 00:10:38.125 08:23:25 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:10:38.125 08:23:25 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:10:38.125 08:23:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:38.125 08:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:10:38.125 08:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:38.125 08:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:38.125 ************************************ 00:10:38.125 START TEST bdev_gpt_uuid 00:10:38.125 ************************************ 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1132 -- # bdev_gpt_uuid 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63093 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63093 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # '[' -z 63093 ']' 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@843 -- # local max_retries=100 00:10:38.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@847 -- # xtrace_disable 00:10:38.125 08:23:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:38.125 [2024-11-20 08:23:25.556372] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:10:38.125 [2024-11-20 08:23:25.556501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63093 ] 00:10:38.384 [2024-11-20 08:23:25.737903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.384 [2024-11-20 08:23:25.882483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.322 08:23:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:10:39.322 08:23:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@871 -- # return 0 00:10:39.322 08:23:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:39.322 08:23:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:39.322 08:23:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 Some configs were skipped because the RPC state that can call them passed over. 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:10:39.890 { 00:10:39.890 "name": "Nvme1n1p1", 00:10:39.890 "aliases": [ 00:10:39.890 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:39.890 ], 00:10:39.890 "product_name": "GPT Disk", 00:10:39.890 "block_size": 4096, 00:10:39.890 "num_blocks": 655104, 00:10:39.890 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:39.890 "assigned_rate_limits": { 00:10:39.890 "rw_ios_per_sec": 0, 00:10:39.890 "rw_mbytes_per_sec": 0, 00:10:39.890 "r_mbytes_per_sec": 0, 00:10:39.890 "w_mbytes_per_sec": 0 00:10:39.890 }, 00:10:39.890 "claimed": false, 00:10:39.890 "zoned": false, 00:10:39.890 "supported_io_types": { 00:10:39.890 "read": true, 00:10:39.890 "write": true, 00:10:39.890 "unmap": true, 00:10:39.890 "flush": true, 00:10:39.890 "reset": true, 00:10:39.890 "nvme_admin": false, 00:10:39.890 "nvme_io": false, 00:10:39.890 "nvme_io_md": false, 00:10:39.890 "write_zeroes": true, 00:10:39.890 "zcopy": false, 00:10:39.890 "get_zone_info": false, 00:10:39.890 "zone_management": false, 00:10:39.890 "zone_append": false, 00:10:39.890 "compare": true, 00:10:39.890 "compare_and_write": false, 00:10:39.890 "abort": true, 00:10:39.890 "seek_hole": false, 00:10:39.890 "seek_data": false, 00:10:39.890 "copy": true, 00:10:39.890 "nvme_iov_md": false 00:10:39.890 }, 00:10:39.890 "driver_specific": { 
00:10:39.890 "gpt": { 00:10:39.890 "base_bdev": "Nvme1n1", 00:10:39.890 "offset_blocks": 256, 00:10:39.890 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:39.890 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:39.890 "partition_name": "SPDK_TEST_first" 00:10:39.890 } 00:10:39.890 } 00:10:39.890 } 00:10:39.890 ]' 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:39.890 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:10:39.890 { 00:10:39.890 "name": "Nvme1n1p2", 00:10:39.890 "aliases": [ 00:10:39.890 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:39.890 ], 00:10:39.890 "product_name": "GPT Disk", 00:10:39.890 "block_size": 4096, 00:10:39.890 "num_blocks": 655103, 00:10:39.890 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:39.890 "assigned_rate_limits": { 00:10:39.890 "rw_ios_per_sec": 0, 00:10:39.890 "rw_mbytes_per_sec": 0, 00:10:39.890 "r_mbytes_per_sec": 0, 00:10:39.890 "w_mbytes_per_sec": 0 00:10:39.890 }, 00:10:39.890 "claimed": false, 00:10:39.890 "zoned": false, 00:10:39.890 "supported_io_types": { 00:10:39.890 "read": true, 00:10:39.890 "write": true, 00:10:39.890 "unmap": true, 00:10:39.890 "flush": true, 00:10:39.890 "reset": true, 00:10:39.890 "nvme_admin": false, 00:10:39.890 "nvme_io": false, 00:10:39.890 "nvme_io_md": false, 00:10:39.890 "write_zeroes": true, 00:10:39.890 "zcopy": false, 00:10:39.890 "get_zone_info": false, 00:10:39.890 "zone_management": false, 00:10:39.890 "zone_append": false, 00:10:39.890 "compare": true, 00:10:39.890 "compare_and_write": false, 00:10:39.890 "abort": true, 00:10:39.890 "seek_hole": false, 00:10:39.890 "seek_data": false, 00:10:39.890 "copy": true, 00:10:39.890 "nvme_iov_md": false 00:10:39.890 }, 00:10:39.890 "driver_specific": { 00:10:39.890 "gpt": { 00:10:39.890 "base_bdev": "Nvme1n1", 00:10:39.890 "offset_blocks": 655360, 00:10:39.890 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:39.890 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:39.890 "partition_name": "SPDK_TEST_second" 00:10:39.890 } 00:10:39.890 } 00:10:39.890 } 00:10:39.890 ]' 00:10:39.891 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:10:39.891 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:10:39.891 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63093 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' -z 63093 ']' 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@961 -- # kill -0 63093 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # uname 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 63093 00:10:40.148 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:10:40.149 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:10:40.149 killing process with pid 63093 00:10:40.149 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@975 -- # echo 'killing process with pid 63093' 00:10:40.149 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # kill 63093 00:10:40.149 08:23:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@981 -- # wait 63093 00:10:42.681 00:10:42.681 real 0m4.689s 00:10:42.681 user 0m4.636s 00:10:42.681 sys 0m0.686s 00:10:42.681 08:23:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:42.681 08:23:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:42.681 ************************************ 00:10:42.682 END TEST bdev_gpt_uuid 00:10:42.682 ************************************ 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:42.682 08:23:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.508 Waiting for block devices as requested 00:10:43.767 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.767 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:43.767 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.033 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.325 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:49.325 08:23:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:49.325 08:23:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:49.325 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:49.325 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:49.325 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:49.326 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:49.326 08:23:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:49.326 00:10:49.326 real 1m6.805s 00:10:49.326 user 1m22.137s 00:10:49.326 sys 0m12.880s 00:10:49.326 ************************************ 00:10:49.326 END TEST blockdev_nvme_gpt 00:10:49.326 ************************************ 00:10:49.326 08:23:36 blockdev_nvme_gpt -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:49.326 08:23:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:49.584 08:23:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:49.584 08:23:36 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:10:49.585 08:23:36 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:49.585 08:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:49.585 ************************************ 00:10:49.585 START TEST nvme 00:10:49.585 ************************************ 00:10:49.585 08:23:36 nvme -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:49.585 * Looking for test storage... 00:10:49.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:49.585 08:23:37 nvme -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:10:49.585 08:23:37 nvme -- common/autotest_common.sh@1638 -- # lcov --version 00:10:49.585 08:23:37 nvme -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:10:49.843 08:23:37 nvme -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:10:49.843 08:23:37 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.843 08:23:37 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.843 08:23:37 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.843 08:23:37 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.843 08:23:37 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.843 08:23:37 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.843 08:23:37 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.843 08:23:37 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.844 08:23:37 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.844 08:23:37 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.844 08:23:37 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:49.844 08:23:37 nvme -- scripts/common.sh@345 -- # : 1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.844 08:23:37 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.844 08:23:37 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@353 -- # local d=1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.844 08:23:37 nvme -- scripts/common.sh@355 -- # echo 1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.844 08:23:37 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:49.844 08:23:37 nvme -- scripts/common.sh@353 -- # local d=2 00:10:49.844 08:23:37 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.844 08:23:37 nvme -- scripts/common.sh@355 -- # echo 2 00:10:49.844 08:23:37 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.844 08:23:37 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.844 08:23:37 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.844 08:23:37 nvme -- scripts/common.sh@368 -- # return 0 00:10:49.844 08:23:37 nvme -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.844 08:23:37 nvme -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:10:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.844 --rc genhtml_branch_coverage=1 00:10:49.844 --rc genhtml_function_coverage=1 00:10:49.844 --rc genhtml_legend=1 00:10:49.844 --rc geninfo_all_blocks=1 00:10:49.844 --rc geninfo_unexecuted_blocks=1 00:10:49.844 00:10:49.844 ' 00:10:49.844 08:23:37 nvme -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:10:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.844 --rc genhtml_branch_coverage=1 00:10:49.844 --rc genhtml_function_coverage=1 00:10:49.844 --rc genhtml_legend=1 00:10:49.844 --rc geninfo_all_blocks=1 00:10:49.844 --rc geninfo_unexecuted_blocks=1 00:10:49.844 00:10:49.844 ' 00:10:49.844 08:23:37 nvme -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:10:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.844 --rc genhtml_branch_coverage=1 00:10:49.844 --rc genhtml_function_coverage=1 00:10:49.844 --rc genhtml_legend=1 00:10:49.844 --rc geninfo_all_blocks=1 00:10:49.844 --rc geninfo_unexecuted_blocks=1 00:10:49.844 00:10:49.844 ' 00:10:49.844 08:23:37 nvme -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:10:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.844 --rc genhtml_branch_coverage=1 00:10:49.844 --rc genhtml_function_coverage=1 00:10:49.844 --rc genhtml_legend=1 00:10:49.844 --rc geninfo_all_blocks=1 00:10:49.844 --rc geninfo_unexecuted_blocks=1 00:10:49.844 00:10:49.844 ' 00:10:49.844 08:23:37 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.348 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.348 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.348 08:23:38 nvme -- nvme/nvme.sh@79 -- # uname 00:10:51.348 08:23:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:51.348 08:23:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:51.348 08:23:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1089 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:51.348 08:23:38 nvme -- 
common/autotest_common.sh@1075 -- # _randomize_va_space=2 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1076 -- # echo 0 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1078 -- # stubpid=63769 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1077 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1079 -- # echo Waiting for stub to ready for secondary processes... 00:10:51.348 Waiting for stub to ready for secondary processes... 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1080 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:51.348 08:23:38 nvme -- common/autotest_common.sh@1082 -- # [[ -e /proc/63769 ]] 00:10:51.349 08:23:38 nvme -- common/autotest_common.sh@1083 -- # sleep 1s 00:10:51.349 [2024-11-20 08:23:38.900938] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:10:51.349 [2024-11-20 08:23:38.901094] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:52.726 08:23:39 nvme -- common/autotest_common.sh@1080 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:52.726 08:23:39 nvme -- common/autotest_common.sh@1082 -- # [[ -e /proc/63769 ]] 00:10:52.726 08:23:39 nvme -- common/autotest_common.sh@1083 -- # sleep 1s 00:10:53.294 [2024-11-20 08:23:40.556599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.294 [2024-11-20 08:23:40.673014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.294 [2024-11-20 08:23:40.673151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.294 [2024-11-20 08:23:40.673172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.294 [2024-11-20 08:23:40.694719] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:53.294 [2024-11-20 08:23:40.694766] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:53.294 [2024-11-20 08:23:40.712129] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:53.294 [2024-11-20 08:23:40.712280] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:53.294 [2024-11-20 08:23:40.715270] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:53.294 [2024-11-20 08:23:40.716073] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:53.294 [2024-11-20 08:23:40.716172] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:53.294 [2024-11-20 08:23:40.719477] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:53.294 [2024-11-20 08:23:40.719681] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:53.294 [2024-11-20 08:23:40.719779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:53.294 [2024-11-20 08:23:40.723583] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:53.294 [2024-11-20 08:23:40.723821] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:53.294 [2024-11-20 08:23:40.723896] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:53.294 [2024-11-20 08:23:40.723951] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:53.294 [2024-11-20 08:23:40.724012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:53.552 done. 00:10:53.552 08:23:40 nvme -- common/autotest_common.sh@1080 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:53.552 08:23:40 nvme -- common/autotest_common.sh@1085 -- # echo done. 00:10:53.552 08:23:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:53.552 08:23:40 nvme -- common/autotest_common.sh@1108 -- # '[' 10 -le 1 ']' 00:10:53.552 08:23:40 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:53.552 08:23:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.552 ************************************ 00:10:53.552 START TEST nvme_reset 00:10:53.552 ************************************ 00:10:53.552 08:23:40 nvme.nvme_reset -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:53.810 Initializing NVMe Controllers 00:10:53.810 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:53.810 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:53.810 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:53.810 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:53.810 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:53.810 ************************************ 00:10:53.810 END TEST nvme_reset 00:10:53.810 ************************************ 00:10:53.810 00:10:53.810 real 0m0.308s 00:10:53.811 user 0m0.095s 00:10:53.811 sys 0m0.170s 00:10:53.811 08:23:41 nvme.nvme_reset -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:53.811 08:23:41 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.811 08:23:41 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:53.811 08:23:41 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:10:53.811 08:23:41 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:53.811 08:23:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.811 ************************************ 00:10:53.811 START TEST nvme_identify 00:10:53.811 ************************************ 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1132 -- # nvme_identify 00:10:53.811 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:53.811 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:53.811 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:53.811 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1486 -- # bdfs=() 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1486 -- # local bdfs 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:10:53.811 08:23:41 nvme.nvme_identify -- common/autotest_common.sh@1488 -- # (( 4 == 0 )) 00:10:53.811 08:23:41 nvme.nvme_identify -- 
common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:53.811 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:54.072 [2024-11-20 08:23:41.596337] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63802 terminated unexpected 00:10:54.072 ===================================================== 00:10:54.072 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:54.072 ===================================================== 00:10:54.072 Controller Capabilities/Features 00:10:54.072 ================================ 00:10:54.072 Vendor ID: 1b36 00:10:54.072 Subsystem Vendor ID: 1af4 00:10:54.072 Serial Number: 12340 00:10:54.072 Model Number: QEMU NVMe Ctrl 00:10:54.072 Firmware Version: 8.0.0 00:10:54.072 Recommended Arb Burst: 6 00:10:54.073 IEEE OUI Identifier: 00 54 52 00:10:54.073 Multi-path I/O 00:10:54.073 May have multiple subsystem ports: No 00:10:54.073 May have multiple controllers: No 00:10:54.073 Associated with SR-IOV VF: No 00:10:54.073 Max Data Transfer Size: 524288 00:10:54.073 Max Number of Namespaces: 256 00:10:54.073 Max Number of I/O Queues: 64 00:10:54.073 NVMe Specification Version (VS): 1.4 00:10:54.073 NVMe Specification Version (Identify): 1.4 00:10:54.073 Maximum Queue Entries: 2048 00:10:54.073 Contiguous Queues Required: Yes 00:10:54.073 Arbitration Mechanisms Supported 00:10:54.073 Weighted Round Robin: Not Supported 00:10:54.073 Vendor Specific: Not Supported 00:10:54.073 Reset Timeout: 7500 ms 00:10:54.073 Doorbell Stride: 4 bytes 00:10:54.073 NVM Subsystem Reset: Not Supported 00:10:54.073 Command Sets Supported 00:10:54.073 NVM Command Set: Supported 00:10:54.073 Boot Partition: Not Supported 00:10:54.073 Memory Page Size Minimum: 4096 bytes 00:10:54.073 Memory Page Size Maximum: 65536 bytes 00:10:54.073 Persistent Memory Region: Not Supported 00:10:54.073 Optional Asynchronous Events Supported 00:10:54.073 Namespace Attribute Notices: Supported 00:10:54.073 Firmware Activation Notices: Not Supported 00:10:54.073 ANA Change Notices: Not Supported 00:10:54.073 PLE Aggregate Log Change Notices: Not Supported 00:10:54.073 LBA Status Info Alert Notices: Not Supported 00:10:54.073 EGE Aggregate Log Change Notices: Not Supported 00:10:54.073 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.073 Zone Descriptor Change Notices: Not Supported 00:10:54.073 Discovery Log Change Notices: Not Supported 00:10:54.073 Controller Attributes 00:10:54.073 128-bit Host Identifier: Not Supported 00:10:54.073 Non-Operational Permissive Mode: Not Supported 00:10:54.073 NVM Sets: Not Supported 00:10:54.073 Read Recovery Levels: Not Supported 00:10:54.073 Endurance Groups: Not Supported 00:10:54.073 Predictable Latency Mode: Not Supported 00:10:54.073 Traffic Based Keep ALive: Not Supported 00:10:54.073 Namespace Granularity: Not Supported 00:10:54.073 SQ Associations: Not Supported 00:10:54.073 UUID List: Not Supported 00:10:54.073 Multi-Domain Subsystem: Not Supported 00:10:54.073 Fixed Capacity Management: Not Supported 00:10:54.073 Variable Capacity Management: Not Supported 00:10:54.073 Delete Endurance Group: Not Supported 00:10:54.073 Delete NVM Set: Not Supported 00:10:54.073 Extended LBA Formats Supported: Supported 00:10:54.073 Flexible Data Placement Supported: Not Supported 00:10:54.073 00:10:54.073 Controller Memory Buffer Support 00:10:54.073 ================================ 00:10:54.073 Supported: No 
00:10:54.073 00:10:54.073 Persistent Memory Region Support 00:10:54.073 ================================ 00:10:54.073 Supported: No 00:10:54.073 00:10:54.073 Admin Command Set Attributes 00:10:54.073 ============================ 00:10:54.073 Security Send/Receive: Not Supported 00:10:54.073 Format NVM: Supported 00:10:54.073 Firmware Activate/Download: Not Supported 00:10:54.073 Namespace Management: Supported 00:10:54.073 Device Self-Test: Not Supported 00:10:54.073 Directives: Supported 00:10:54.073 NVMe-MI: Not Supported 00:10:54.073 Virtualization Management: Not Supported 00:10:54.073 Doorbell Buffer Config: Supported 00:10:54.073 Get LBA Status Capability: Not Supported 00:10:54.073 Command & Feature Lockdown Capability: Not Supported 00:10:54.073 Abort Command Limit: 4 00:10:54.073 Async Event Request Limit: 4 00:10:54.073 Number of Firmware Slots: N/A 00:10:54.073 Firmware Slot 1 Read-Only: N/A 00:10:54.073 Firmware Activation Without Reset: N/A 00:10:54.073 Multiple Update Detection Support: N/A 00:10:54.073 Firmware Update Granularity: No Information Provided 00:10:54.073 Per-Namespace SMART Log: Yes 00:10:54.073 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.073 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:54.073 Command Effects Log Page: Supported 00:10:54.073 Get Log Page Extended Data: Supported 00:10:54.073 Telemetry Log Pages: Not Supported 00:10:54.073 Persistent Event Log Pages: Not Supported 00:10:54.073 Supported Log Pages Log Page: May Support 00:10:54.073 Commands Supported & Effects Log Page: Not Supported 00:10:54.073 Feature Identifiers & Effects Log Page:May Support 00:10:54.073 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.073 Data Area 4 for Telemetry Log: Not Supported 00:10:54.073 Error Log Page Entries Supported: 1 00:10:54.073 Keep Alive: Not Supported 00:10:54.073 00:10:54.073 NVM Command Set Attributes 00:10:54.073 ========================== 00:10:54.073 Submission Queue Entry Size 00:10:54.073 Max: 64 00:10:54.073 Min: 64 00:10:54.073 Completion Queue Entry Size 00:10:54.073 Max: 16 00:10:54.073 Min: 16 00:10:54.073 Number of Namespaces: 256 00:10:54.073 Compare Command: Supported 00:10:54.073 Write Uncorrectable Command: Not Supported 00:10:54.073 Dataset Management Command: Supported 00:10:54.073 Write Zeroes Command: Supported 00:10:54.073 Set Features Save Field: Supported 00:10:54.073 Reservations: Not Supported 00:10:54.073 Timestamp: Supported 00:10:54.073 Copy: Supported 00:10:54.073 Volatile Write Cache: Present 00:10:54.073 Atomic Write Unit (Normal): 1 00:10:54.073 Atomic Write Unit (PFail): 1 00:10:54.073 Atomic Compare & Write Unit: 1 00:10:54.073 Fused Compare & Write: Not Supported 00:10:54.073 Scatter-Gather List 00:10:54.073 SGL Command Set: Supported 00:10:54.073 SGL Keyed: Not Supported 00:10:54.073 SGL Bit Bucket Descriptor: Not Supported 00:10:54.073 SGL Metadata Pointer: Not Supported 00:10:54.073 Oversized SGL: Not Supported 00:10:54.073 SGL Metadata Address: Not Supported 00:10:54.073 SGL Offset: Not Supported 00:10:54.073 Transport SGL Data Block: Not Supported 00:10:54.073 Replay Protected Memory Block: Not Supported 00:10:54.073 00:10:54.073 Firmware Slot Information 00:10:54.073 ========================= 00:10:54.073 Active slot: 1 00:10:54.073 Slot 1 Firmware Revision: 1.0 00:10:54.073 00:10:54.073 00:10:54.073 Commands Supported and Effects 00:10:54.073 ============================== 00:10:54.073 Admin Commands 00:10:54.073 -------------- 00:10:54.073 Delete I/O Submission Queue (00h): Supported 
00:10:54.073 Create I/O Submission Queue (01h): Supported 00:10:54.073 Get Log Page (02h): Supported 00:10:54.073 Delete I/O Completion Queue (04h): Supported 00:10:54.073 Create I/O Completion Queue (05h): Supported 00:10:54.073 Identify (06h): Supported 00:10:54.073 Abort (08h): Supported 00:10:54.073 Set Features (09h): Supported 00:10:54.073 Get Features (0Ah): Supported 00:10:54.073 Asynchronous Event Request (0Ch): Supported 00:10:54.073 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.073 Directive Send (19h): Supported 00:10:54.073 Directive Receive (1Ah): Supported 00:10:54.073 Virtualization Management (1Ch): Supported 00:10:54.073 Doorbell Buffer Config (7Ch): Supported 00:10:54.073 Format NVM (80h): Supported LBA-Change 00:10:54.073 I/O Commands 00:10:54.073 ------------ 00:10:54.073 Flush (00h): Supported LBA-Change 00:10:54.073 Write (01h): Supported LBA-Change 00:10:54.073 Read (02h): Supported 00:10:54.073 Compare (05h): Supported 00:10:54.073 Write Zeroes (08h): Supported LBA-Change 00:10:54.073 Dataset Management (09h): Supported LBA-Change 00:10:54.073 Unknown (0Ch): Supported 00:10:54.073 Unknown (12h): Supported 00:10:54.073 Copy (19h): Supported LBA-Change 00:10:54.073 Unknown (1Dh): Supported LBA-Change 00:10:54.073 00:10:54.073 Error Log 00:10:54.073 ========= 00:10:54.073 00:10:54.073 Arbitration 00:10:54.073 =========== 00:10:54.073 Arbitration Burst: no limit 00:10:54.073 00:10:54.073 Power Management 00:10:54.073 ================ 00:10:54.073 Number of Power States: 1 00:10:54.073 Current Power State: Power State #0 00:10:54.073 Power State #0: 00:10:54.073 Max Power: 25.00 W 00:10:54.073 Non-Operational State: Operational 00:10:54.073 Entry Latency: 16 microseconds 00:10:54.073 Exit Latency: 4 microseconds 00:10:54.073 Relative Read Throughput: 0 00:10:54.073 Relative Read Latency: 0 00:10:54.073 Relative Write Throughput: 0 00:10:54.073 Relative Write Latency: 0 00:10:54.073 Idle Power[2024-11-20 08:23:41.597778] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63802 terminated unexpected 00:10:54.073 : Not Reported 00:10:54.073 Active Power: Not Reported 00:10:54.073 Non-Operational Permissive Mode: Not Supported 00:10:54.073 00:10:54.073 Health Information 00:10:54.073 ================== 00:10:54.073 Critical Warnings: 00:10:54.074 Available Spare Space: OK 00:10:54.074 Temperature: OK 00:10:54.074 Device Reliability: OK 00:10:54.074 Read Only: No 00:10:54.074 Volatile Memory Backup: OK 00:10:54.074 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.074 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.074 Available Spare: 0% 00:10:54.074 Available Spare Threshold: 0% 00:10:54.074 Life Percentage Used: 0% 00:10:54.074 Data Units Read: 714 00:10:54.074 Data Units Written: 647 00:10:54.074 Host Read Commands: 35244 00:10:54.074 Host Write Commands: 35118 00:10:54.074 Controller Busy Time: 0 minutes 00:10:54.074 Power Cycles: 0 00:10:54.074 Power On Hours: 0 hours 00:10:54.074 Unsafe Shutdowns: 0 00:10:54.074 Unrecoverable Media Errors: 0 00:10:54.074 Lifetime Error Log Entries: 0 00:10:54.074 Warning Temperature Time: 0 minutes 00:10:54.074 Critical Temperature Time: 0 minutes 00:10:54.074 00:10:54.074 Number of Queues 00:10:54.074 ================ 00:10:54.074 Number of I/O Submission Queues: 64 00:10:54.074 Number of I/O Completion Queues: 64 00:10:54.074 00:10:54.074 ZNS Specific Controller Data 00:10:54.074 ============================ 00:10:54.074 Zone Append Size Limit: 0 00:10:54.074 
00:10:54.074 00:10:54.074 Active Namespaces 00:10:54.074 ================= 00:10:54.074 Namespace ID:1 00:10:54.074 Error Recovery Timeout: Unlimited 00:10:54.074 Command Set Identifier: NVM (00h) 00:10:54.074 Deallocate: Supported 00:10:54.074 Deallocated/Unwritten Error: Supported 00:10:54.074 Deallocated Read Value: All 0x00 00:10:54.074 Deallocate in Write Zeroes: Not Supported 00:10:54.074 Deallocated Guard Field: 0xFFFF 00:10:54.074 Flush: Supported 00:10:54.074 Reservation: Not Supported 00:10:54.074 Metadata Transferred as: Separate Metadata Buffer 00:10:54.074 Namespace Sharing Capabilities: Private 00:10:54.074 Size (in LBAs): 1548666 (5GiB) 00:10:54.074 Capacity (in LBAs): 1548666 (5GiB) 00:10:54.074 Utilization (in LBAs): 1548666 (5GiB) 00:10:54.074 Thin Provisioning: Not Supported 00:10:54.074 Per-NS Atomic Units: No 00:10:54.074 Maximum Single Source Range Length: 128 00:10:54.074 Maximum Copy Length: 128 00:10:54.074 Maximum Source Range Count: 128 00:10:54.074 NGUID/EUI64 Never Reused: No 00:10:54.074 Namespace Write Protected: No 00:10:54.074 Number of LBA Formats: 8 00:10:54.074 Current LBA Format: LBA Format #07 00:10:54.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.074 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.074 00:10:54.074 NVM Specific Namespace Data 00:10:54.074 =========================== 00:10:54.074 Logical Block Storage Tag Mask: 0 00:10:54.074 Protection Information Capabilities: 00:10:54.074 16b Guard Protection Information Storage Tag Support: No 00:10:54.074 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.074 Storage Tag Check Read Support: No 00:10:54.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.074 ===================================================== 00:10:54.074 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:54.074 ===================================================== 00:10:54.074 Controller Capabilities/Features 00:10:54.074 ================================ 00:10:54.074 Vendor ID: 1b36 00:10:54.074 Subsystem Vendor ID: 1af4 00:10:54.074 Serial Number: 12341 00:10:54.074 Model Number: QEMU NVMe Ctrl 00:10:54.074 Firmware Version: 8.0.0 00:10:54.074 Recommended Arb Burst: 6 00:10:54.074 IEEE OUI Identifier: 00 54 52 00:10:54.074 Multi-path I/O 00:10:54.074 May have multiple subsystem ports: No 00:10:54.074 May have multiple controllers: No 
00:10:54.074 Associated with SR-IOV VF: No 00:10:54.074 Max Data Transfer Size: 524288 00:10:54.074 Max Number of Namespaces: 256 00:10:54.074 Max Number of I/O Queues: 64 00:10:54.074 NVMe Specification Version (VS): 1.4 00:10:54.074 NVMe Specification Version (Identify): 1.4 00:10:54.074 Maximum Queue Entries: 2048 00:10:54.074 Contiguous Queues Required: Yes 00:10:54.074 Arbitration Mechanisms Supported 00:10:54.074 Weighted Round Robin: Not Supported 00:10:54.074 Vendor Specific: Not Supported 00:10:54.074 Reset Timeout: 7500 ms 00:10:54.074 Doorbell Stride: 4 bytes 00:10:54.074 NVM Subsystem Reset: Not Supported 00:10:54.074 Command Sets Supported 00:10:54.074 NVM Command Set: Supported 00:10:54.074 Boot Partition: Not Supported 00:10:54.074 Memory Page Size Minimum: 4096 bytes 00:10:54.074 Memory Page Size Maximum: 65536 bytes 00:10:54.074 Persistent Memory Region: Not Supported 00:10:54.074 Optional Asynchronous Events Supported 00:10:54.074 Namespace Attribute Notices: Supported 00:10:54.074 Firmware Activation Notices: Not Supported 00:10:54.074 ANA Change Notices: Not Supported 00:10:54.074 PLE Aggregate Log Change Notices: Not Supported 00:10:54.074 LBA Status Info Alert Notices: Not Supported 00:10:54.074 EGE Aggregate Log Change Notices: Not Supported 00:10:54.074 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.074 Zone Descriptor Change Notices: Not Supported 00:10:54.074 Discovery Log Change Notices: Not Supported 00:10:54.074 Controller Attributes 00:10:54.074 128-bit Host Identifier: Not Supported 00:10:54.074 Non-Operational Permissive Mode: Not Supported 00:10:54.074 NVM Sets: Not Supported 00:10:54.074 Read Recovery Levels: Not Supported 00:10:54.074 Endurance Groups: Not Supported 00:10:54.074 Predictable Latency Mode: Not Supported 00:10:54.074 Traffic Based Keep Alive: Not Supported 00:10:54.074 Namespace Granularity: Not Supported 00:10:54.074 SQ Associations: Not Supported 00:10:54.074 UUID List: Not Supported 00:10:54.074 Multi-Domain Subsystem: Not Supported 00:10:54.074 Fixed Capacity Management: Not Supported 00:10:54.074 Variable Capacity Management: Not Supported 00:10:54.074 Delete Endurance Group: Not Supported 00:10:54.074 Delete NVM Set: Not Supported 00:10:54.074 Extended LBA Formats Supported: Supported 00:10:54.074 Flexible Data Placement Supported: Not Supported 00:10:54.074 00:10:54.074 Controller Memory Buffer Support 00:10:54.074 ================================ 00:10:54.074 Supported: No 00:10:54.074 00:10:54.074 Persistent Memory Region Support 00:10:54.074 ================================ 00:10:54.074 Supported: No 00:10:54.074 00:10:54.074 Admin Command Set Attributes 00:10:54.074 ============================ 00:10:54.074 Security Send/Receive: Not Supported 00:10:54.074 Format NVM: Supported 00:10:54.074 Firmware Activate/Download: Not Supported 00:10:54.074 Namespace Management: Supported 00:10:54.074 Device Self-Test: Not Supported 00:10:54.074 Directives: Supported 00:10:54.074 NVMe-MI: Not Supported 00:10:54.074 Virtualization Management: Not Supported 00:10:54.074 Doorbell Buffer Config: Supported 00:10:54.074 Get LBA Status Capability: Not Supported 00:10:54.074 Command & Feature Lockdown Capability: Not Supported 00:10:54.074 Abort Command Limit: 4 00:10:54.074 Async Event Request Limit: 4 00:10:54.074 Number of Firmware Slots: N/A 00:10:54.074 Firmware Slot 1 Read-Only: N/A 00:10:54.074 Firmware Activation Without Reset: N/A 00:10:54.074 Multiple Update Detection Support: N/A 00:10:54.074 Firmware Update Granularity: No
Information Provided 00:10:54.074 Per-Namespace SMART Log: Yes 00:10:54.074 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.074 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:54.074 Command Effects Log Page: Supported 00:10:54.074 Get Log Page Extended Data: Supported 00:10:54.074 Telemetry Log Pages: Not Supported 00:10:54.074 Persistent Event Log Pages: Not Supported 00:10:54.074 Supported Log Pages Log Page: May Support 00:10:54.074 Commands Supported & Effects Log Page: Not Supported 00:10:54.074 Feature Identifiers & Effects Log Page: May Support 00:10:54.074 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.074 Data Area 4 for Telemetry Log: Not Supported 00:10:54.074 Error Log Page Entries Supported: 1 00:10:54.074 Keep Alive: Not Supported 00:10:54.074 00:10:54.074 NVM Command Set Attributes 00:10:54.075 ========================== 00:10:54.075 Submission Queue Entry Size 00:10:54.075 Max: 64 00:10:54.075 Min: 64 00:10:54.075 Completion Queue Entry Size 00:10:54.075 Max: 16 00:10:54.075 Min: 16 00:10:54.075 Number of Namespaces: 256 00:10:54.075 Compare Command: Supported 00:10:54.075 Write Uncorrectable Command: Not Supported 00:10:54.075 Dataset Management Command: Supported 00:10:54.075 Write Zeroes Command: Supported 00:10:54.075 Set Features Save Field: Supported 00:10:54.075 Reservations: Not Supported 00:10:54.075 Timestamp: Supported 00:10:54.075 Copy: Supported 00:10:54.075 Volatile Write Cache: Present 00:10:54.075 Atomic Write Unit (Normal): 1 00:10:54.075 Atomic Write Unit (PFail): 1 00:10:54.075 Atomic Compare & Write Unit: 1 00:10:54.075 Fused Compare & Write: Not Supported 00:10:54.075 Scatter-Gather List 00:10:54.075 SGL Command Set: Supported 00:10:54.075 SGL Keyed: Not Supported 00:10:54.075 SGL Bit Bucket Descriptor: Not Supported 00:10:54.075 SGL Metadata Pointer: Not Supported 00:10:54.075 Oversized SGL: Not Supported 00:10:54.075 SGL Metadata Address: Not Supported 00:10:54.075 SGL Offset: Not Supported 00:10:54.075 Transport SGL Data Block: Not Supported 00:10:54.075 Replay Protected Memory Block: Not Supported 00:10:54.075 00:10:54.075 Firmware Slot Information 00:10:54.075 ========================= 00:10:54.075 Active slot: 1 00:10:54.075 Slot 1 Firmware Revision: 1.0 00:10:54.075 00:10:54.075 00:10:54.075 Commands Supported and Effects 00:10:54.075 ============================== 00:10:54.075 Admin Commands 00:10:54.075 -------------- 00:10:54.075 Delete I/O Submission Queue (00h): Supported 00:10:54.075 Create I/O Submission Queue (01h): Supported 00:10:54.075 Get Log Page (02h): Supported 00:10:54.075 Delete I/O Completion Queue (04h): Supported 00:10:54.075 Create I/O Completion Queue (05h): Supported 00:10:54.075 Identify (06h): Supported 00:10:54.075 Abort (08h): Supported 00:10:54.075 Set Features (09h): Supported 00:10:54.075 Get Features (0Ah): Supported 00:10:54.075 Asynchronous Event Request (0Ch): Supported 00:10:54.075 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.075 Directive Send (19h): Supported 00:10:54.075 Directive Receive (1Ah): Supported 00:10:54.075 Virtualization Management (1Ch): Supported 00:10:54.075 Doorbell Buffer Config (7Ch): Supported 00:10:54.075 Format NVM (80h): Supported LBA-Change 00:10:54.075 I/O Commands 00:10:54.075 ------------ 00:10:54.075 Flush (00h): Supported LBA-Change 00:10:54.075 Write (01h): Supported LBA-Change 00:10:54.075 Read (02h): Supported 00:10:54.075 Compare (05h): Supported 00:10:54.075 Write Zeroes (08h): Supported LBA-Change 00:10:54.075 Dataset Management
(09h): Supported LBA-Change 00:10:54.075 Unknown (0Ch): Supported 00:10:54.075 Unknown (12h): Supported 00:10:54.075 Copy (19h): Supported LBA-Change 00:10:54.075 Unknown (1Dh): Supported LBA-Change 00:10:54.075 00:10:54.075 Error Log 00:10:54.075 ========= 00:10:54.075 00:10:54.075 Arbitration 00:10:54.075 =========== 00:10:54.075 Arbitration Burst: no limit 00:10:54.075 00:10:54.075 Power Management 00:10:54.075 ================ 00:10:54.075 Number of Power States: 1 00:10:54.075 Current Power State: Power State #0 00:10:54.075 Power State #0: 00:10:54.075 Max Power: 25.00 W 00:10:54.075 Non-Operational State: Operational 00:10:54.075 Entry Latency: 16 microseconds 00:10:54.075 Exit Latency: 4 microseconds 00:10:54.075 Relative Read Throughput: 0 00:10:54.075 Relative Read Latency: 0 00:10:54.075 Relative Write Throughput: 0 00:10:54.075 Relative Write Latency: 0 00:10:54.075 Idle Power: Not Reported 00:10:54.075 Active Power: Not Reported 00:10:54.075 Non-Operational Permissive Mode: Not Supported 00:10:54.075 00:10:54.075 Health Information 00:10:54.075 ================== 00:10:54.075 Critical Warnings: 00:10:54.075 Available Spare Space: OK [2024-11-20 08:23:41.598602] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63802 terminated unexpected 00:10:54.075 Temperature: OK 00:10:54.075 Device Reliability: OK 00:10:54.075 Read Only: No 00:10:54.075 Volatile Memory Backup: OK 00:10:54.075 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.075 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.075 Available Spare: 0% 00:10:54.075 Available Spare Threshold: 0% 00:10:54.075 Life Percentage Used: 0% 00:10:54.075 Data Units Read: 1156 00:10:54.075 Data Units Written: 1020 00:10:54.075 Host Read Commands: 53352 00:10:54.075 Host Write Commands: 52139 00:10:54.075 Controller Busy Time: 0 minutes 00:10:54.075 Power Cycles: 0 00:10:54.075 Power On Hours: 0 hours 00:10:54.075 Unsafe Shutdowns: 0 00:10:54.075 Unrecoverable Media Errors: 0 00:10:54.075 Lifetime Error Log Entries: 0 00:10:54.075 Warning Temperature Time: 0 minutes 00:10:54.075 Critical Temperature Time: 0 minutes 00:10:54.075 00:10:54.075 Number of Queues 00:10:54.075 ================ 00:10:54.075 Number of I/O Submission Queues: 64 00:10:54.075 Number of I/O Completion Queues: 64 00:10:54.075 00:10:54.075 ZNS Specific Controller Data 00:10:54.075 ============================ 00:10:54.075 Zone Append Size Limit: 0 00:10:54.075 00:10:54.075 00:10:54.075 Active Namespaces 00:10:54.075 ================= 00:10:54.075 Namespace ID:1 00:10:54.075 Error Recovery Timeout: Unlimited 00:10:54.075 Command Set Identifier: NVM (00h) 00:10:54.075 Deallocate: Supported 00:10:54.075 Deallocated/Unwritten Error: Supported 00:10:54.075 Deallocated Read Value: All 0x00 00:10:54.075 Deallocate in Write Zeroes: Not Supported 00:10:54.075 Deallocated Guard Field: 0xFFFF 00:10:54.075 Flush: Supported 00:10:54.075 Reservation: Not Supported 00:10:54.075 Namespace Sharing Capabilities: Private 00:10:54.075 Size (in LBAs): 1310720 (5GiB) 00:10:54.075 Capacity (in LBAs): 1310720 (5GiB) 00:10:54.075 Utilization (in LBAs): 1310720 (5GiB) 00:10:54.075 Thin Provisioning: Not Supported 00:10:54.075 Per-NS Atomic Units: No 00:10:54.075 Maximum Single Source Range Length: 128 00:10:54.075 Maximum Copy Length: 128 00:10:54.075 Maximum Source Range Count: 128 00:10:54.075 NGUID/EUI64 Never Reused: No 00:10:54.075 Namespace Write Protected: No 00:10:54.075 Number of LBA Formats: 8 00:10:54.075 Current LBA
Format: LBA Format #04 00:10:54.075 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.075 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.075 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.075 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.075 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.075 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.075 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.075 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.075 00:10:54.075 NVM Specific Namespace Data 00:10:54.075 =========================== 00:10:54.075 Logical Block Storage Tag Mask: 0 00:10:54.075 Protection Information Capabilities: 00:10:54.075 16b Guard Protection Information Storage Tag Support: No 00:10:54.075 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.075 Storage Tag Check Read Support: No 00:10:54.075 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.075 ===================================================== 00:10:54.075 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:54.075 ===================================================== 00:10:54.075 Controller Capabilities/Features 00:10:54.075 ================================ 00:10:54.075 Vendor ID: 1b36 00:10:54.075 Subsystem Vendor ID: 1af4 00:10:54.075 Serial Number: 12343 00:10:54.075 Model Number: QEMU NVMe Ctrl 00:10:54.075 Firmware Version: 8.0.0 00:10:54.075 Recommended Arb Burst: 6 00:10:54.075 IEEE OUI Identifier: 00 54 52 00:10:54.075 Multi-path I/O 00:10:54.075 May have multiple subsystem ports: No 00:10:54.075 May have multiple controllers: Yes 00:10:54.075 Associated with SR-IOV VF: No 00:10:54.075 Max Data Transfer Size: 524288 00:10:54.075 Max Number of Namespaces: 256 00:10:54.075 Max Number of I/O Queues: 64 00:10:54.075 NVMe Specification Version (VS): 1.4 00:10:54.075 NVMe Specification Version (Identify): 1.4 00:10:54.075 Maximum Queue Entries: 2048 00:10:54.076 Contiguous Queues Required: Yes 00:10:54.076 Arbitration Mechanisms Supported 00:10:54.076 Weighted Round Robin: Not Supported 00:10:54.076 Vendor Specific: Not Supported 00:10:54.076 Reset Timeout: 7500 ms 00:10:54.076 Doorbell Stride: 4 bytes 00:10:54.076 NVM Subsystem Reset: Not Supported 00:10:54.076 Command Sets Supported 00:10:54.076 NVM Command Set: Supported 00:10:54.076 Boot Partition: Not Supported 00:10:54.076 Memory Page Size Minimum: 4096 bytes 00:10:54.076 Memory Page Size Maximum: 65536 bytes 00:10:54.076 Persistent Memory Region: Not Supported 00:10:54.076 Optional Asynchronous Events Supported 00:10:54.076 Namespace Attribute Notices: Supported 00:10:54.076 Firmware Activation Notices: Not Supported 00:10:54.076 ANA Change Notices: Not Supported 00:10:54.076 PLE Aggregate 
Log Change Notices: Not Supported 00:10:54.076 LBA Status Info Alert Notices: Not Supported 00:10:54.076 EGE Aggregate Log Change Notices: Not Supported 00:10:54.076 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.076 Zone Descriptor Change Notices: Not Supported 00:10:54.076 Discovery Log Change Notices: Not Supported 00:10:54.076 Controller Attributes 00:10:54.076 128-bit Host Identifier: Not Supported 00:10:54.076 Non-Operational Permissive Mode: Not Supported 00:10:54.076 NVM Sets: Not Supported 00:10:54.076 Read Recovery Levels: Not Supported 00:10:54.076 Endurance Groups: Supported 00:10:54.076 Predictable Latency Mode: Not Supported 00:10:54.076 Traffic Based Keep Alive: Not Supported 00:10:54.076 Namespace Granularity: Not Supported 00:10:54.076 SQ Associations: Not Supported 00:10:54.076 UUID List: Not Supported 00:10:54.076 Multi-Domain Subsystem: Not Supported 00:10:54.076 Fixed Capacity Management: Not Supported 00:10:54.076 Variable Capacity Management: Not Supported 00:10:54.076 Delete Endurance Group: Not Supported 00:10:54.076 Delete NVM Set: Not Supported 00:10:54.076 Extended LBA Formats Supported: Supported 00:10:54.076 Flexible Data Placement Supported: Supported 00:10:54.076 00:10:54.076 Controller Memory Buffer Support 00:10:54.076 ================================ 00:10:54.076 Supported: No 00:10:54.076 00:10:54.076 Persistent Memory Region Support 00:10:54.076 ================================ 00:10:54.076 Supported: No 00:10:54.076 00:10:54.076 Admin Command Set Attributes 00:10:54.076 ============================ 00:10:54.076 Security Send/Receive: Not Supported 00:10:54.076 Format NVM: Supported 00:10:54.076 Firmware Activate/Download: Not Supported 00:10:54.076 Namespace Management: Supported 00:10:54.076 Device Self-Test: Not Supported 00:10:54.076 Directives: Supported 00:10:54.076 NVMe-MI: Not Supported 00:10:54.076 Virtualization Management: Not Supported 00:10:54.076 Doorbell Buffer Config: Supported 00:10:54.076 Get LBA Status Capability: Not Supported 00:10:54.076 Command & Feature Lockdown Capability: Not Supported 00:10:54.076 Abort Command Limit: 4 00:10:54.076 Async Event Request Limit: 4 00:10:54.076 Number of Firmware Slots: N/A 00:10:54.076 Firmware Slot 1 Read-Only: N/A 00:10:54.076 Firmware Activation Without Reset: N/A 00:10:54.076 Multiple Update Detection Support: N/A 00:10:54.076 Firmware Update Granularity: No Information Provided 00:10:54.076 Per-Namespace SMART Log: Yes 00:10:54.076 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.076 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:54.076 Command Effects Log Page: Supported 00:10:54.076 Get Log Page Extended Data: Supported 00:10:54.076 Telemetry Log Pages: Not Supported 00:10:54.076 Persistent Event Log Pages: Not Supported 00:10:54.076 Supported Log Pages Log Page: May Support 00:10:54.076 Commands Supported & Effects Log Page: Not Supported 00:10:54.076 Feature Identifiers & Effects Log Page: May Support 00:10:54.076 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.076 Data Area 4 for Telemetry Log: Not Supported 00:10:54.076 Error Log Page Entries Supported: 1 00:10:54.076 Keep Alive: Not Supported 00:10:54.076 00:10:54.076 NVM Command Set Attributes 00:10:54.076 ========================== 00:10:54.076 Submission Queue Entry Size 00:10:54.076 Max: 64 00:10:54.076 Min: 64 00:10:54.076 Completion Queue Entry Size 00:10:54.076 Max: 16 00:10:54.076 Min: 16 00:10:54.076 Number of Namespaces: 256 00:10:54.076 Compare Command: Supported 00:10:54.076 Write
Uncorrectable Command: Not Supported 00:10:54.076 Dataset Management Command: Supported 00:10:54.076 Write Zeroes Command: Supported 00:10:54.076 Set Features Save Field: Supported 00:10:54.076 Reservations: Not Supported 00:10:54.076 Timestamp: Supported 00:10:54.076 Copy: Supported 00:10:54.076 Volatile Write Cache: Present 00:10:54.076 Atomic Write Unit (Normal): 1 00:10:54.076 Atomic Write Unit (PFail): 1 00:10:54.076 Atomic Compare & Write Unit: 1 00:10:54.076 Fused Compare & Write: Not Supported 00:10:54.076 Scatter-Gather List 00:10:54.076 SGL Command Set: Supported 00:10:54.076 SGL Keyed: Not Supported 00:10:54.076 SGL Bit Bucket Descriptor: Not Supported 00:10:54.076 SGL Metadata Pointer: Not Supported 00:10:54.076 Oversized SGL: Not Supported 00:10:54.076 SGL Metadata Address: Not Supported 00:10:54.076 SGL Offset: Not Supported 00:10:54.076 Transport SGL Data Block: Not Supported 00:10:54.076 Replay Protected Memory Block: Not Supported 00:10:54.076 00:10:54.076 Firmware Slot Information 00:10:54.076 ========================= 00:10:54.076 Active slot: 1 00:10:54.076 Slot 1 Firmware Revision: 1.0 00:10:54.076 00:10:54.076 00:10:54.076 Commands Supported and Effects 00:10:54.076 ============================== 00:10:54.076 Admin Commands 00:10:54.076 -------------- 00:10:54.076 Delete I/O Submission Queue (00h): Supported 00:10:54.076 Create I/O Submission Queue (01h): Supported 00:10:54.076 Get Log Page (02h): Supported 00:10:54.076 Delete I/O Completion Queue (04h): Supported 00:10:54.076 Create I/O Completion Queue (05h): Supported 00:10:54.076 Identify (06h): Supported 00:10:54.076 Abort (08h): Supported 00:10:54.076 Set Features (09h): Supported 00:10:54.076 Get Features (0Ah): Supported 00:10:54.076 Asynchronous Event Request (0Ch): Supported 00:10:54.076 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.076 Directive Send (19h): Supported 00:10:54.076 Directive Receive (1Ah): Supported 00:10:54.076 Virtualization Management (1Ch): Supported 00:10:54.076 Doorbell Buffer Config (7Ch): Supported 00:10:54.076 Format NVM (80h): Supported LBA-Change 00:10:54.076 I/O Commands 00:10:54.076 ------------ 00:10:54.076 Flush (00h): Supported LBA-Change 00:10:54.076 Write (01h): Supported LBA-Change 00:10:54.076 Read (02h): Supported 00:10:54.076 Compare (05h): Supported 00:10:54.076 Write Zeroes (08h): Supported LBA-Change 00:10:54.076 Dataset Management (09h): Supported LBA-Change 00:10:54.076 Unknown (0Ch): Supported 00:10:54.076 Unknown (12h): Supported 00:10:54.076 Copy (19h): Supported LBA-Change 00:10:54.076 Unknown (1Dh): Supported LBA-Change 00:10:54.076 00:10:54.076 Error Log 00:10:54.076 ========= 00:10:54.076 00:10:54.076 Arbitration 00:10:54.076 =========== 00:10:54.076 Arbitration Burst: no limit 00:10:54.076 00:10:54.076 Power Management 00:10:54.076 ================ 00:10:54.076 Number of Power States: 1 00:10:54.076 Current Power State: Power State #0 00:10:54.076 Power State #0: 00:10:54.076 Max Power: 25.00 W 00:10:54.076 Non-Operational State: Operational 00:10:54.076 Entry Latency: 16 microseconds 00:10:54.076 Exit Latency: 4 microseconds 00:10:54.076 Relative Read Throughput: 0 00:10:54.076 Relative Read Latency: 0 00:10:54.076 Relative Write Throughput: 0 00:10:54.076 Relative Write Latency: 0 00:10:54.076 Idle Power: Not Reported 00:10:54.076 Active Power: Not Reported 00:10:54.076 Non-Operational Permissive Mode: Not Supported 00:10:54.076 00:10:54.076 Health Information 00:10:54.076 ================== 00:10:54.076 Critical Warnings: 00:10:54.076 
Available Spare Space: OK 00:10:54.076 Temperature: OK 00:10:54.076 Device Reliability: OK 00:10:54.076 Read Only: No 00:10:54.076 Volatile Memory Backup: OK 00:10:54.076 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.076 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.076 Available Spare: 0% 00:10:54.076 Available Spare Threshold: 0% 00:10:54.076 Life Percentage Used: 0% 00:10:54.076 Data Units Read: 832 00:10:54.076 Data Units Written: 765 00:10:54.076 Host Read Commands: 36690 00:10:54.076 Host Write Commands: 36211 00:10:54.077 Controller Busy Time: 0 minutes 00:10:54.077 Power Cycles: 0 00:10:54.077 Power On Hours: 0 hours 00:10:54.077 Unsafe Shutdowns: 0 00:10:54.077 Unrecoverable Media Errors: 0 00:10:54.077 Lifetime Error Log Entries: 0 00:10:54.077 Warning Temperature Time: 0 minutes 00:10:54.077 Critical Temperature Time: 0 minutes 00:10:54.077 00:10:54.077 Number of Queues 00:10:54.077 ================ 00:10:54.077 Number of I/O Submission Queues: 64 00:10:54.077 Number of I/O Completion Queues: 64 00:10:54.077 00:10:54.077 ZNS Specific Controller Data 00:10:54.077 ============================ 00:10:54.077 Zone Append Size Limit: 0 00:10:54.077 00:10:54.077 00:10:54.077 Active Namespaces 00:10:54.077 ================= 00:10:54.077 Namespace ID:1 00:10:54.077 Error Recovery Timeout: Unlimited 00:10:54.077 Command Set Identifier: NVM (00h) 00:10:54.077 Deallocate: Supported 00:10:54.077 Deallocated/Unwritten Error: Supported 00:10:54.077 Deallocated Read Value: All 0x00 00:10:54.077 Deallocate in Write Zeroes: Not Supported 00:10:54.077 Deallocated Guard Field: 0xFFFF 00:10:54.077 Flush: Supported 00:10:54.077 Reservation: Not Supported 00:10:54.077 Namespace Sharing Capabilities: Multiple Controllers 00:10:54.077 Size (in LBAs): 262144 (1GiB) 00:10:54.077 Capacity (in LBAs): 262144 (1GiB) 00:10:54.077 Utilization (in LBAs): 262144 (1GiB) 00:10:54.077 Thin Provisioning: Not Supported 00:10:54.077 Per-NS Atomic Units: No 00:10:54.077 Maximum Single Source Range Length: 128 00:10:54.077 Maximum Copy Length: 128 00:10:54.077 Maximum Source Range Count: 128 00:10:54.077 NGUID/EUI64 Never Reused: No 00:10:54.077 Namespace Write Protected: No 00:10:54.077 Endurance group ID: 1 00:10:54.077 Number of LBA Formats: 8 00:10:54.077 Current LBA Format: LBA Format #04 00:10:54.077 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.077 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.077 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.077 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.077 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.077 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.077 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.077 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.077 00:10:54.077 Get Feature FDP: 00:10:54.077 ================ 00:10:54.077 Enabled: Yes 00:10:54.077 FDP configuration index: 0 00:10:54.077 00:10:54.077 FDP configurations log page 00:10:54.077 =========================== 00:10:54.077 Number of FDP configurations: 1 00:10:54.077 Version: 0 00:10:54.077 Size: 112 00:10:54.077 FDP Configuration Descriptor: 0 00:10:54.077 Descriptor Size: 96 00:10:54.077 Reclaim Group Identifier format: 2 00:10:54.077 FDP Volatile Write Cache: Not Present 00:10:54.077 FDP Configuration: Valid 00:10:54.077 Vendor Specific Size: 0 00:10:54.077 Number of Reclaim Groups: 2 00:10:54.077 Number of Reclaim Unit Handles: 8 00:10:54.077 Max Placement Identifiers: 128 00:10:54.077 Number of
Namespaces Supported: 256 00:10:54.077 Reclaim unit Nominal Size: 6000000 bytes 00:10:54.077 Estimated Reclaim Unit Time Limit: Not Reported 00:10:54.077 RUH Desc #000: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #001: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #002: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #003: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #004: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #005: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #006: RUH Type: Initially Isolated 00:10:54.077 RUH Desc #007: RUH Type: Initially Isolated 00:10:54.077 00:10:54.077 FDP reclaim unit handle usage log page 00:10:54.077 ====================================== 00:10:54.077 Number of Reclaim Unit Handles: 8 00:10:54.077 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:54.077 RUH Usage Desc #001: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #002: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #003: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #004: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #005: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #006: RUH Attributes: Unused 00:10:54.077 RUH Usage Desc #007: RUH Attributes: Unused 00:10:54.077 00:10:54.077 FDP statistics log page 00:10:54.077 ======================= 00:10:54.077 Host bytes with metadata written: 499687424 [2024-11-20 08:23:41.600731] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63802 terminated unexpected 00:10:54.077 Media bytes with metadata written: 499740672 00:10:54.077 Media bytes erased: 0 00:10:54.077 00:10:54.077 FDP events log page 00:10:54.077 =================== 00:10:54.077 Number of FDP events: 0 00:10:54.077 00:10:54.077 NVM Specific Namespace Data 00:10:54.077 =========================== 00:10:54.077 Logical Block Storage Tag Mask: 0 00:10:54.077 Protection Information Capabilities: 00:10:54.077 16b Guard Protection Information Storage Tag Support: No 00:10:54.077 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.077 Storage Tag Check Read Support: No 00:10:54.077 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.077 ===================================================== 00:10:54.077 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:54.077 ===================================================== 00:10:54.077 Controller Capabilities/Features 00:10:54.077 ================================ 00:10:54.077 Vendor ID: 1b36 00:10:54.077 Subsystem Vendor ID: 1af4 00:10:54.077 Serial Number: 12342 00:10:54.077 Model Number: QEMU NVMe Ctrl 00:10:54.077 Firmware Version: 8.0.0 00:10:54.077 Recommended Arb Burst: 6 00:10:54.077 IEEE OUI Identifier: 00 54 52 00:10:54.077 Multi-path I/O
00:10:54.077 May have multiple subsystem ports: No 00:10:54.077 May have multiple controllers: No 00:10:54.077 Associated with SR-IOV VF: No 00:10:54.077 Max Data Transfer Size: 524288 00:10:54.077 Max Number of Namespaces: 256 00:10:54.077 Max Number of I/O Queues: 64 00:10:54.077 NVMe Specification Version (VS): 1.4 00:10:54.077 NVMe Specification Version (Identify): 1.4 00:10:54.077 Maximum Queue Entries: 2048 00:10:54.077 Contiguous Queues Required: Yes 00:10:54.077 Arbitration Mechanisms Supported 00:10:54.077 Weighted Round Robin: Not Supported 00:10:54.077 Vendor Specific: Not Supported 00:10:54.077 Reset Timeout: 7500 ms 00:10:54.077 Doorbell Stride: 4 bytes 00:10:54.077 NVM Subsystem Reset: Not Supported 00:10:54.077 Command Sets Supported 00:10:54.077 NVM Command Set: Supported 00:10:54.077 Boot Partition: Not Supported 00:10:54.077 Memory Page Size Minimum: 4096 bytes 00:10:54.077 Memory Page Size Maximum: 65536 bytes 00:10:54.077 Persistent Memory Region: Not Supported 00:10:54.077 Optional Asynchronous Events Supported 00:10:54.077 Namespace Attribute Notices: Supported 00:10:54.077 Firmware Activation Notices: Not Supported 00:10:54.077 ANA Change Notices: Not Supported 00:10:54.077 PLE Aggregate Log Change Notices: Not Supported 00:10:54.077 LBA Status Info Alert Notices: Not Supported 00:10:54.077 EGE Aggregate Log Change Notices: Not Supported 00:10:54.077 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.077 Zone Descriptor Change Notices: Not Supported 00:10:54.077 Discovery Log Change Notices: Not Supported 00:10:54.077 Controller Attributes 00:10:54.077 128-bit Host Identifier: Not Supported 00:10:54.078 Non-Operational Permissive Mode: Not Supported 00:10:54.078 NVM Sets: Not Supported 00:10:54.078 Read Recovery Levels: Not Supported 00:10:54.078 Endurance Groups: Not Supported 00:10:54.078 Predictable Latency Mode: Not Supported 00:10:54.078 Traffic Based Keep Alive: Not Supported 00:10:54.078 Namespace Granularity: Not Supported 00:10:54.078 SQ Associations: Not Supported 00:10:54.078 UUID List: Not Supported 00:10:54.078 Multi-Domain Subsystem: Not Supported 00:10:54.078 Fixed Capacity Management: Not Supported 00:10:54.078 Variable Capacity Management: Not Supported 00:10:54.078 Delete Endurance Group: Not Supported 00:10:54.078 Delete NVM Set: Not Supported 00:10:54.078 Extended LBA Formats Supported: Supported 00:10:54.078 Flexible Data Placement Supported: Not Supported 00:10:54.078 00:10:54.078 Controller Memory Buffer Support 00:10:54.078 ================================ 00:10:54.078 Supported: No 00:10:54.078 00:10:54.078 Persistent Memory Region Support 00:10:54.078 ================================ 00:10:54.078 Supported: No 00:10:54.078 00:10:54.078 Admin Command Set Attributes 00:10:54.078 ============================ 00:10:54.078 Security Send/Receive: Not Supported 00:10:54.078 Format NVM: Supported 00:10:54.078 Firmware Activate/Download: Not Supported 00:10:54.078 Namespace Management: Supported 00:10:54.078 Device Self-Test: Not Supported 00:10:54.078 Directives: Supported 00:10:54.078 NVMe-MI: Not Supported 00:10:54.078 Virtualization Management: Not Supported 00:10:54.078 Doorbell Buffer Config: Supported 00:10:54.078 Get LBA Status Capability: Not Supported 00:10:54.078 Command & Feature Lockdown Capability: Not Supported 00:10:54.078 Abort Command Limit: 4 00:10:54.078 Async Event Request Limit: 4 00:10:54.078 Number of Firmware Slots: N/A 00:10:54.078 Firmware Slot 1 Read-Only: N/A 00:10:54.078 Firmware Activation Without Reset: N/A 
00:10:54.078 Multiple Update Detection Support: N/A 00:10:54.078 Firmware Update Granularity: No Information Provided 00:10:54.078 Per-Namespace SMART Log: Yes 00:10:54.078 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.078 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:54.078 Command Effects Log Page: Supported 00:10:54.078 Get Log Page Extended Data: Supported 00:10:54.078 Telemetry Log Pages: Not Supported 00:10:54.078 Persistent Event Log Pages: Not Supported 00:10:54.078 Supported Log Pages Log Page: May Support 00:10:54.078 Commands Supported & Effects Log Page: Not Supported 00:10:54.078 Feature Identifiers & Effects Log Page: May Support 00:10:54.078 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.078 Data Area 4 for Telemetry Log: Not Supported 00:10:54.078 Error Log Page Entries Supported: 1 00:10:54.078 Keep Alive: Not Supported 00:10:54.078 00:10:54.078 NVM Command Set Attributes 00:10:54.078 ========================== 00:10:54.078 Submission Queue Entry Size 00:10:54.078 Max: 64 00:10:54.078 Min: 64 00:10:54.078 Completion Queue Entry Size 00:10:54.078 Max: 16 00:10:54.078 Min: 16 00:10:54.078 Number of Namespaces: 256 00:10:54.078 Compare Command: Supported 00:10:54.078 Write Uncorrectable Command: Not Supported 00:10:54.078 Dataset Management Command: Supported 00:10:54.078 Write Zeroes Command: Supported 00:10:54.078 Set Features Save Field: Supported 00:10:54.078 Reservations: Not Supported 00:10:54.078 Timestamp: Supported 00:10:54.078 Copy: Supported 00:10:54.078 Volatile Write Cache: Present 00:10:54.078 Atomic Write Unit (Normal): 1 00:10:54.078 Atomic Write Unit (PFail): 1 00:10:54.078 Atomic Compare & Write Unit: 1 00:10:54.078 Fused Compare & Write: Not Supported 00:10:54.078 Scatter-Gather List 00:10:54.078 SGL Command Set: Supported 00:10:54.078 SGL Keyed: Not Supported 00:10:54.078 SGL Bit Bucket Descriptor: Not Supported 00:10:54.078 SGL Metadata Pointer: Not Supported 00:10:54.078 Oversized SGL: Not Supported 00:10:54.078 SGL Metadata Address: Not Supported 00:10:54.078 SGL Offset: Not Supported 00:10:54.078 Transport SGL Data Block: Not Supported 00:10:54.078 Replay Protected Memory Block: Not Supported 00:10:54.078 00:10:54.078 Firmware Slot Information 00:10:54.078 ========================= 00:10:54.078 Active slot: 1 00:10:54.078 Slot 1 Firmware Revision: 1.0 00:10:54.078 00:10:54.078 00:10:54.078 Commands Supported and Effects 00:10:54.078 ============================== 00:10:54.078 Admin Commands 00:10:54.078 -------------- 00:10:54.078 Delete I/O Submission Queue (00h): Supported 00:10:54.078 Create I/O Submission Queue (01h): Supported 00:10:54.078 Get Log Page (02h): Supported 00:10:54.078 Delete I/O Completion Queue (04h): Supported 00:10:54.078 Create I/O Completion Queue (05h): Supported 00:10:54.078 Identify (06h): Supported 00:10:54.078 Abort (08h): Supported 00:10:54.078 Set Features (09h): Supported 00:10:54.078 Get Features (0Ah): Supported 00:10:54.078 Asynchronous Event Request (0Ch): Supported 00:10:54.078 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.078 Directive Send (19h): Supported 00:10:54.078 Directive Receive (1Ah): Supported 00:10:54.078 Virtualization Management (1Ch): Supported 00:10:54.078 Doorbell Buffer Config (7Ch): Supported 00:10:54.078 Format NVM (80h): Supported LBA-Change 00:10:54.078 I/O Commands 00:10:54.078 ------------ 00:10:54.078 Flush (00h): Supported LBA-Change 00:10:54.078 Write (01h): Supported LBA-Change 00:10:54.078 Read (02h): Supported 00:10:54.078 Compare (05h):
Supported 00:10:54.078 Write Zeroes (08h): Supported LBA-Change 00:10:54.078 Dataset Management (09h): Supported LBA-Change 00:10:54.078 Unknown (0Ch): Supported 00:10:54.078 Unknown (12h): Supported 00:10:54.078 Copy (19h): Supported LBA-Change 00:10:54.078 Unknown (1Dh): Supported LBA-Change 00:10:54.078 00:10:54.078 Error Log 00:10:54.078 ========= 00:10:54.078 00:10:54.078 Arbitration 00:10:54.078 =========== 00:10:54.078 Arbitration Burst: no limit 00:10:54.078 00:10:54.078 Power Management 00:10:54.078 ================ 00:10:54.078 Number of Power States: 1 00:10:54.078 Current Power State: Power State #0 00:10:54.078 Power State #0: 00:10:54.078 Max Power: 25.00 W 00:10:54.078 Non-Operational State: Operational 00:10:54.078 Entry Latency: 16 microseconds 00:10:54.078 Exit Latency: 4 microseconds 00:10:54.078 Relative Read Throughput: 0 00:10:54.078 Relative Read Latency: 0 00:10:54.078 Relative Write Throughput: 0 00:10:54.078 Relative Write Latency: 0 00:10:54.078 Idle Power: Not Reported 00:10:54.078 Active Power: Not Reported 00:10:54.078 Non-Operational Permissive Mode: Not Supported 00:10:54.078 00:10:54.078 Health Information 00:10:54.078 ================== 00:10:54.078 Critical Warnings: 00:10:54.078 Available Spare Space: OK 00:10:54.078 Temperature: OK 00:10:54.078 Device Reliability: OK 00:10:54.078 Read Only: No 00:10:54.078 Volatile Memory Backup: OK 00:10:54.078 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.078 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.078 Available Spare: 0% 00:10:54.078 Available Spare Threshold: 0% 00:10:54.078 Life Percentage Used: 0% 00:10:54.078 Data Units Read: 2300 00:10:54.078 Data Units Written: 2100 00:10:54.078 Host Read Commands: 107952 00:10:54.078 Host Write Commands: 106515 00:10:54.078 Controller Busy Time: 0 minutes 00:10:54.078 Power Cycles: 0 00:10:54.078 Power On Hours: 0 hours 00:10:54.078 Unsafe Shutdowns: 0 00:10:54.078 Unrecoverable Media Errors: 0 00:10:54.078 Lifetime Error Log Entries: 0 00:10:54.078 Warning Temperature Time: 0 minutes 00:10:54.078 Critical Temperature Time: 0 minutes 00:10:54.078 00:10:54.078 Number of Queues 00:10:54.078 ================ 00:10:54.078 Number of I/O Submission Queues: 64 00:10:54.078 Number of I/O Completion Queues: 64 00:10:54.078 00:10:54.078 ZNS Specific Controller Data 00:10:54.078 ============================ 00:10:54.078 Zone Append Size Limit: 0 00:10:54.078 00:10:54.078 00:10:54.078 Active Namespaces 00:10:54.078 ================= 00:10:54.078 Namespace ID:1 00:10:54.078 Error Recovery Timeout: Unlimited 00:10:54.078 Command Set Identifier: NVM (00h) 00:10:54.078 Deallocate: Supported 00:10:54.078 Deallocated/Unwritten Error: Supported 00:10:54.078 Deallocated Read Value: All 0x00 00:10:54.078 Deallocate in Write Zeroes: Not Supported 00:10:54.078 Deallocated Guard Field: 0xFFFF 00:10:54.078 Flush: Supported 00:10:54.078 Reservation: Not Supported 00:10:54.078 Namespace Sharing Capabilities: Private 00:10:54.078 Size (in LBAs): 1048576 (4GiB) 00:10:54.078 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.078 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.079 Thin Provisioning: Not Supported 00:10:54.079 Per-NS Atomic Units: No 00:10:54.079 Maximum Single Source Range Length: 128 00:10:54.079 Maximum Copy Length: 128 00:10:54.079 Maximum Source Range Count: 128 00:10:54.079 NGUID/EUI64 Never Reused: No 00:10:54.079 Namespace Write Protected: No 00:10:54.079 Number of LBA Formats: 8 00:10:54.079 Current LBA Format: LBA Format #04 00:10:54.079 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:54.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.079 00:10:54.079 NVM Specific Namespace Data 00:10:54.079 =========================== 00:10:54.079 Logical Block Storage Tag Mask: 0 00:10:54.079 Protection Information Capabilities: 00:10:54.079 16b Guard Protection Information Storage Tag Support: No 00:10:54.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.079 Storage Tag Check Read Support: No 00:10:54.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Namespace ID:2 00:10:54.079 Error Recovery Timeout: Unlimited 00:10:54.079 Command Set Identifier: NVM (00h) 00:10:54.079 Deallocate: Supported 00:10:54.079 Deallocated/Unwritten Error: Supported 00:10:54.079 Deallocated Read Value: All 0x00 00:10:54.079 Deallocate in Write Zeroes: Not Supported 00:10:54.079 Deallocated Guard Field: 0xFFFF 00:10:54.079 Flush: Supported 00:10:54.079 Reservation: Not Supported 00:10:54.079 Namespace Sharing Capabilities: Private 00:10:54.079 Size (in LBAs): 1048576 (4GiB) 00:10:54.079 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.079 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.079 Thin Provisioning: Not Supported 00:10:54.079 Per-NS Atomic Units: No 00:10:54.079 Maximum Single Source Range Length: 128 00:10:54.079 Maximum Copy Length: 128 00:10:54.079 Maximum Source Range Count: 128 00:10:54.079 NGUID/EUI64 Never Reused: No 00:10:54.079 Namespace Write Protected: No 00:10:54.079 Number of LBA Formats: 8 00:10:54.079 Current LBA Format: LBA Format #04 00:10:54.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.079 00:10:54.079 NVM Specific Namespace Data 00:10:54.079 =========================== 00:10:54.079 Logical Block Storage Tag Mask: 0 00:10:54.079 Protection Information Capabilities: 00:10:54.079 16b Guard Protection Information Storage Tag Support: No 00:10:54.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:54.079 Storage Tag Check Read Support: No 00:10:54.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.079 Namespace ID:3 00:10:54.079 Error Recovery Timeout: Unlimited 00:10:54.079 Command Set Identifier: NVM (00h) 00:10:54.079 Deallocate: Supported 00:10:54.079 Deallocated/Unwritten Error: Supported 00:10:54.079 Deallocated Read Value: All 0x00 00:10:54.079 Deallocate in Write Zeroes: Not Supported 00:10:54.079 Deallocated Guard Field: 0xFFFF 00:10:54.079 Flush: Supported 00:10:54.079 Reservation: Not Supported 00:10:54.079 Namespace Sharing Capabilities: Private 00:10:54.079 Size (in LBAs): 1048576 (4GiB) 00:10:54.338 Capacity (in LBAs): 1048576 (4GiB) 00:10:54.338 Utilization (in LBAs): 1048576 (4GiB) 00:10:54.338 Thin Provisioning: Not Supported 00:10:54.338 Per-NS Atomic Units: No 00:10:54.338 Maximum Single Source Range Length: 128 00:10:54.338 Maximum Copy Length: 128 00:10:54.338 Maximum Source Range Count: 128 00:10:54.338 NGUID/EUI64 Never Reused: No 00:10:54.338 Namespace Write Protected: No 00:10:54.338 Number of LBA Formats: 8 00:10:54.338 Current LBA Format: LBA Format #04 00:10:54.338 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.338 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.338 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.338 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.338 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.338 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.338 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.338 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.338 00:10:54.338 NVM Specific Namespace Data 00:10:54.338 =========================== 00:10:54.338 Logical Block Storage Tag Mask: 0 00:10:54.338 Protection Information Capabilities: 00:10:54.338 16b Guard Protection Information Storage Tag Support: No 00:10:54.338 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.338 Storage Tag Check Read Support: No 00:10:54.338 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.338 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.338 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:54.598 ===================================================== 00:10:54.598 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:54.598 ===================================================== 00:10:54.598 Controller Capabilities/Features 00:10:54.598 ================================ 00:10:54.598 Vendor ID: 1b36 00:10:54.598 Subsystem Vendor ID: 1af4 00:10:54.598 Serial Number: 12340 00:10:54.598 Model Number: QEMU NVMe Ctrl 00:10:54.598 Firmware Version: 8.0.0 00:10:54.598 Recommended Arb Burst: 6 00:10:54.598 IEEE OUI Identifier: 00 54 52 00:10:54.598 Multi-path I/O 00:10:54.598 May have multiple subsystem ports: No 00:10:54.598 May have multiple controllers: No 00:10:54.598 Associated with SR-IOV VF: No 00:10:54.598 Max Data Transfer Size: 524288 00:10:54.598 Max Number of Namespaces: 256 00:10:54.598 Max Number of I/O Queues: 64 00:10:54.599 NVMe Specification Version (VS): 1.4 00:10:54.599 NVMe Specification Version (Identify): 1.4 00:10:54.599 Maximum Queue Entries: 2048 00:10:54.599 Contiguous Queues Required: Yes 00:10:54.599 Arbitration Mechanisms Supported 00:10:54.599 Weighted Round Robin: Not Supported 00:10:54.599 Vendor Specific: Not Supported 00:10:54.599 Reset Timeout: 7500 ms 00:10:54.599 Doorbell Stride: 4 bytes 00:10:54.599 NVM Subsystem Reset: Not Supported 00:10:54.599 Command Sets Supported 00:10:54.599 NVM Command Set: Supported 00:10:54.599 Boot Partition: Not Supported 00:10:54.599 Memory Page Size Minimum: 4096 bytes 00:10:54.599 Memory Page Size Maximum: 65536 bytes 00:10:54.599 Persistent Memory Region: Not Supported 00:10:54.599 Optional Asynchronous Events Supported 00:10:54.599 Namespace Attribute Notices: Supported 00:10:54.599 Firmware Activation Notices: Not Supported 00:10:54.599 ANA Change Notices: Not Supported 00:10:54.599 PLE Aggregate Log Change Notices: Not Supported 00:10:54.599 LBA Status Info Alert Notices: Not Supported 00:10:54.599 EGE Aggregate Log Change Notices: Not Supported 00:10:54.599 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.599 Zone Descriptor Change Notices: Not Supported 00:10:54.599 Discovery Log Change Notices: Not Supported 00:10:54.599 Controller Attributes 00:10:54.599 128-bit Host Identifier: Not Supported 00:10:54.599 Non-Operational Permissive Mode: Not Supported 00:10:54.599 NVM Sets: Not Supported 00:10:54.599 Read Recovery Levels: Not Supported 00:10:54.599 Endurance Groups: Not Supported 00:10:54.599 Predictable Latency Mode: Not Supported 00:10:54.599 Traffic Based Keep Alive: Not Supported 00:10:54.599 Namespace Granularity: Not Supported 00:10:54.599 SQ Associations: Not Supported 00:10:54.599 UUID List: Not Supported 00:10:54.599 Multi-Domain Subsystem: Not Supported 00:10:54.599 Fixed Capacity Management: Not Supported 00:10:54.599 Variable Capacity Management: Not Supported 00:10:54.599 Delete Endurance Group: Not Supported 00:10:54.599 Delete NVM Set: Not Supported 00:10:54.599 Extended LBA Formats Supported: Supported 00:10:54.599 Flexible Data Placement Supported: Not Supported 00:10:54.599 00:10:54.599 Controller Memory Buffer Support 00:10:54.599 ================================ 00:10:54.599 Supported: No 00:10:54.599 00:10:54.599 Persistent Memory Region Support 00:10:54.599 
================================ 00:10:54.599 Supported: No 00:10:54.599 00:10:54.599 Admin Command Set Attributes 00:10:54.599 ============================ 00:10:54.599 Security Send/Receive: Not Supported 00:10:54.599 Format NVM: Supported 00:10:54.599 Firmware Activate/Download: Not Supported 00:10:54.599 Namespace Management: Supported 00:10:54.599 Device Self-Test: Not Supported 00:10:54.599 Directives: Supported 00:10:54.599 NVMe-MI: Not Supported 00:10:54.599 Virtualization Management: Not Supported 00:10:54.599 Doorbell Buffer Config: Supported 00:10:54.599 Get LBA Status Capability: Not Supported 00:10:54.599 Command & Feature Lockdown Capability: Not Supported 00:10:54.599 Abort Command Limit: 4 00:10:54.599 Async Event Request Limit: 4 00:10:54.599 Number of Firmware Slots: N/A 00:10:54.599 Firmware Slot 1 Read-Only: N/A 00:10:54.599 Firmware Activation Without Reset: N/A 00:10:54.599 Multiple Update Detection Support: N/A 00:10:54.600 Firmware Update Granularity: No Information Provided 00:10:54.600 Per-Namespace SMART Log: Yes 00:10:54.600 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.600 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:54.600 Command Effects Log Page: Supported 00:10:54.600 Get Log Page Extended Data: Supported 00:10:54.600 Telemetry Log Pages: Not Supported 00:10:54.600 Persistent Event Log Pages: Not Supported 00:10:54.600 Supported Log Pages Log Page: May Support 00:10:54.600 Commands Supported & Effects Log Page: Not Supported 00:10:54.600 Feature Identifiers & Effects Log Page: May Support 00:10:54.600 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.600 Data Area 4 for Telemetry Log: Not Supported 00:10:54.600 Error Log Page Entries Supported: 1 00:10:54.600 Keep Alive: Not Supported 00:10:54.600 00:10:54.600 NVM Command Set Attributes 00:10:54.600 ========================== 00:10:54.600 Submission Queue Entry Size 00:10:54.600 Max: 64 00:10:54.600 Min: 64 00:10:54.600 Completion Queue Entry Size 00:10:54.600 Max: 16 00:10:54.600 Min: 16 00:10:54.600 Number of Namespaces: 256 00:10:54.600 Compare Command: Supported 00:10:54.600 Write Uncorrectable Command: Not Supported 00:10:54.600 Dataset Management Command: Supported 00:10:54.600 Write Zeroes Command: Supported 00:10:54.600 Set Features Save Field: Supported 00:10:54.600 Reservations: Not Supported 00:10:54.600 Timestamp: Supported 00:10:54.600 Copy: Supported 00:10:54.600 Volatile Write Cache: Present 00:10:54.600 Atomic Write Unit (Normal): 1 00:10:54.600 Atomic Write Unit (PFail): 1 00:10:54.600 Atomic Compare & Write Unit: 1 00:10:54.600 Fused Compare & Write: Not Supported 00:10:54.600 Scatter-Gather List 00:10:54.600 SGL Command Set: Supported 00:10:54.600 SGL Keyed: Not Supported 00:10:54.600 SGL Bit Bucket Descriptor: Not Supported 00:10:54.600 SGL Metadata Pointer: Not Supported 00:10:54.600 Oversized SGL: Not Supported 00:10:54.600 SGL Metadata Address: Not Supported 00:10:54.600 SGL Offset: Not Supported 00:10:54.600 Transport SGL Data Block: Not Supported 00:10:54.600 Replay Protected Memory Block: Not Supported 00:10:54.600 00:10:54.600 Firmware Slot Information 00:10:54.600 ========================= 00:10:54.600 Active slot: 1 00:10:54.600 Slot 1 Firmware Revision: 1.0 00:10:54.600 00:10:54.600 00:10:54.600 Commands Supported and Effects 00:10:54.600 ============================== 00:10:54.600 Admin Commands 00:10:54.600 -------------- 00:10:54.600 Delete I/O Submission Queue (00h): Supported 00:10:54.600 Create I/O Submission Queue (01h): Supported 00:10:54.600 
Get Log Page (02h): Supported 00:10:54.600 Delete I/O Completion Queue (04h): Supported 00:10:54.600 Create I/O Completion Queue (05h): Supported 00:10:54.600 Identify (06h): Supported 00:10:54.600 Abort (08h): Supported 00:10:54.600 Set Features (09h): Supported 00:10:54.600 Get Features (0Ah): Supported 00:10:54.600 Asynchronous Event Request (0Ch): Supported 00:10:54.600 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.600 Directive Send (19h): Supported 00:10:54.600 Directive Receive (1Ah): Supported 00:10:54.600 Virtualization Management (1Ch): Supported 00:10:54.600 Doorbell Buffer Config (7Ch): Supported 00:10:54.600 Format NVM (80h): Supported LBA-Change 00:10:54.600 I/O Commands 00:10:54.600 ------------ 00:10:54.601 Flush (00h): Supported LBA-Change 00:10:54.601 Write (01h): Supported LBA-Change 00:10:54.601 Read (02h): Supported 00:10:54.601 Compare (05h): Supported 00:10:54.601 Write Zeroes (08h): Supported LBA-Change 00:10:54.601 Dataset Management (09h): Supported LBA-Change 00:10:54.601 Unknown (0Ch): Supported 00:10:54.601 Unknown (12h): Supported 00:10:54.601 Copy (19h): Supported LBA-Change 00:10:54.601 Unknown (1Dh): Supported LBA-Change 00:10:54.601 00:10:54.601 Error Log 00:10:54.601 ========= 00:10:54.601 00:10:54.601 Arbitration 00:10:54.601 =========== 00:10:54.601 Arbitration Burst: no limit 00:10:54.601 00:10:54.601 Power Management 00:10:54.601 ================ 00:10:54.601 Number of Power States: 1 00:10:54.601 Current Power State: Power State #0 00:10:54.601 Power State #0: 00:10:54.601 Max Power: 25.00 W 00:10:54.601 Non-Operational State: Operational 00:10:54.601 Entry Latency: 16 microseconds 00:10:54.601 Exit Latency: 4 microseconds 00:10:54.601 Relative Read Throughput: 0 00:10:54.601 Relative Read Latency: 0 00:10:54.601 Relative Write Throughput: 0 00:10:54.601 Relative Write Latency: 0 00:10:54.601 Idle Power: Not Reported 00:10:54.601 Active Power: Not Reported 00:10:54.601 Non-Operational Permissive Mode: Not Supported 00:10:54.601 00:10:54.601 Health Information 00:10:54.601 ================== 00:10:54.601 Critical Warnings: 00:10:54.601 Available Spare Space: OK 00:10:54.601 Temperature: OK 00:10:54.601 Device Reliability: OK 00:10:54.601 Read Only: No 00:10:54.601 Volatile Memory Backup: OK 00:10:54.601 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.601 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.601 Available Spare: 0% 00:10:54.601 Available Spare Threshold: 0% 00:10:54.601 Life Percentage Used: 0% 00:10:54.601 Data Units Read: 714 00:10:54.601 Data Units Written: 647 00:10:54.601 Host Read Commands: 35244 00:10:54.601 Host Write Commands: 35118 00:10:54.601 Controller Busy Time: 0 minutes 00:10:54.601 Power Cycles: 0 00:10:54.601 Power On Hours: 0 hours 00:10:54.601 Unsafe Shutdowns: 0 00:10:54.601 Unrecoverable Media Errors: 0 00:10:54.601 Lifetime Error Log Entries: 0 00:10:54.601 Warning Temperature Time: 0 minutes 00:10:54.601 Critical Temperature Time: 0 minutes 00:10:54.601 00:10:54.601 Number of Queues 00:10:54.601 ================ 00:10:54.601 Number of I/O Submission Queues: 64 00:10:54.601 Number of I/O Completion Queues: 64 00:10:54.601 00:10:54.601 ZNS Specific Controller Data 00:10:54.601 ============================ 00:10:54.601 Zone Append Size Limit: 0 00:10:54.601 00:10:54.601 00:10:54.601 Active Namespaces 00:10:54.601 ================= 00:10:54.601 Namespace ID:1 00:10:54.601 Error Recovery Timeout: Unlimited 00:10:54.601 Command Set Identifier: NVM (00h) 00:10:54.601 Deallocate: Supported 
00:10:54.601 Deallocated/Unwritten Error: Supported 00:10:54.601 Deallocated Read Value: All 0x00 00:10:54.601 Deallocate in Write Zeroes: Not Supported 00:10:54.601 Deallocated Guard Field: 0xFFFF 00:10:54.602 Flush: Supported 00:10:54.602 Reservation: Not Supported 00:10:54.602 Metadata Transferred as: Separate Metadata Buffer 00:10:54.602 Namespace Sharing Capabilities: Private 00:10:54.602 Size (in LBAs): 1548666 (5GiB) 00:10:54.602 Capacity (in LBAs): 1548666 (5GiB) 00:10:54.602 Utilization (in LBAs): 1548666 (5GiB) 00:10:54.602 Thin Provisioning: Not Supported 00:10:54.602 Per-NS Atomic Units: No 00:10:54.602 Maximum Single Source Range Length: 128 00:10:54.602 Maximum Copy Length: 128 00:10:54.602 Maximum Source Range Count: 128 00:10:54.602 NGUID/EUI64 Never Reused: No 00:10:54.602 Namespace Write Protected: No 00:10:54.602 Number of LBA Formats: 8 00:10:54.602 Current LBA Format: LBA Format #07 00:10:54.602 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.602 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:54.602 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.602 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.602 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.602 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.602 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.602 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.602 00:10:54.602 NVM Specific Namespace Data 00:10:54.602 =========================== 00:10:54.602 Logical Block Storage Tag Mask: 0 00:10:54.602 Protection Information Capabilities: 00:10:54.602 16b Guard Protection Information Storage Tag Support: No 00:10:54.602 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.602 Storage Tag Check Read Support: No 00:10:54.602 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.602 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.602 08:23:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:54.862 ===================================================== 00:10:54.862 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:54.862 ===================================================== 00:10:54.862 Controller Capabilities/Features 00:10:54.862 ================================ 00:10:54.862 Vendor ID: 1b36 00:10:54.862 Subsystem Vendor ID: 1af4 00:10:54.862 Serial Number: 12341 00:10:54.862 Model Number: QEMU NVMe Ctrl 00:10:54.862 Firmware Version: 8.0.0 00:10:54.862 Recommended Arb Burst: 6 00:10:54.862 IEEE OUI Identifier: 00 54 52 00:10:54.862 Multi-path I/O 00:10:54.863 May have multiple subsystem ports: No 00:10:54.863 May have multiple 
controllers: No 00:10:54.863 Associated with SR-IOV VF: No 00:10:54.863 Max Data Transfer Size: 524288 00:10:54.863 Max Number of Namespaces: 256 00:10:54.863 Max Number of I/O Queues: 64 00:10:54.863 NVMe Specification Version (VS): 1.4 00:10:54.863 NVMe Specification Version (Identify): 1.4 00:10:54.863 Maximum Queue Entries: 2048 00:10:54.863 Contiguous Queues Required: Yes 00:10:54.863 Arbitration Mechanisms Supported 00:10:54.863 Weighted Round Robin: Not Supported 00:10:54.863 Vendor Specific: Not Supported 00:10:54.863 Reset Timeout: 7500 ms 00:10:54.863 Doorbell Stride: 4 bytes 00:10:54.863 NVM Subsystem Reset: Not Supported 00:10:54.863 Command Sets Supported 00:10:54.863 NVM Command Set: Supported 00:10:54.863 Boot Partition: Not Supported 00:10:54.863 Memory Page Size Minimum: 4096 bytes 00:10:54.863 Memory Page Size Maximum: 65536 bytes 00:10:54.863 Persistent Memory Region: Not Supported 00:10:54.863 Optional Asynchronous Events Supported 00:10:54.863 Namespace Attribute Notices: Supported 00:10:54.863 Firmware Activation Notices: Not Supported 00:10:54.863 ANA Change Notices: Not Supported 00:10:54.863 PLE Aggregate Log Change Notices: Not Supported 00:10:54.863 LBA Status Info Alert Notices: Not Supported 00:10:54.863 EGE Aggregate Log Change Notices: Not Supported 00:10:54.863 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.863 Zone Descriptor Change Notices: Not Supported 00:10:54.863 Discovery Log Change Notices: Not Supported 00:10:54.863 Controller Attributes 00:10:54.863 128-bit Host Identifier: Not Supported 00:10:54.863 Non-Operational Permissive Mode: Not Supported 00:10:54.863 NVM Sets: Not Supported 00:10:54.863 Read Recovery Levels: Not Supported 00:10:54.863 Endurance Groups: Not Supported 00:10:54.863 Predictable Latency Mode: Not Supported 00:10:54.863 Traffic Based Keep Alive: Not Supported 00:10:54.863 Namespace Granularity: Not Supported 00:10:54.863 SQ Associations: Not Supported 00:10:54.863 UUID List: Not Supported 00:10:54.863 Multi-Domain Subsystem: Not Supported 00:10:54.863 Fixed Capacity Management: Not Supported 00:10:54.863 Variable Capacity Management: Not Supported 00:10:54.863 Delete Endurance Group: Not Supported 00:10:54.863 Delete NVM Set: Not Supported 00:10:54.863 Extended LBA Formats Supported: Supported 00:10:54.863 Flexible Data Placement Supported: Not Supported 00:10:54.863 00:10:54.863 Controller Memory Buffer Support 00:10:54.863 ================================ 00:10:54.863 Supported: No 00:10:54.863 00:10:54.863 Persistent Memory Region Support 00:10:54.863 ================================ 00:10:54.863 Supported: No 00:10:54.863 00:10:54.863 Admin Command Set Attributes 00:10:54.863 ============================ 00:10:54.863 Security Send/Receive: Not Supported 00:10:54.863 Format NVM: Supported 00:10:54.863 Firmware Activate/Download: Not Supported 00:10:54.863 Namespace Management: Supported 00:10:54.863 Device Self-Test: Not Supported 00:10:54.863 Directives: Supported 00:10:54.863 NVMe-MI: Not Supported 00:10:54.863 Virtualization Management: Not Supported 00:10:54.863 Doorbell Buffer Config: Supported 00:10:54.863 Get LBA Status Capability: Not Supported 00:10:54.863 Command & Feature Lockdown Capability: Not Supported 00:10:54.863 Abort Command Limit: 4 00:10:54.863 Async Event Request Limit: 4 00:10:54.863 Number of Firmware Slots: N/A 00:10:54.863 Firmware Slot 1 Read-Only: N/A 00:10:54.863 Firmware Activation Without Reset: N/A 00:10:54.863 Multiple Update Detection Support: N/A 00:10:54.863 Firmware Update 
Granularity: No Information Provided 00:10:54.863 Per-Namespace SMART Log: Yes 00:10:54.863 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.863 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:54.863 Command Effects Log Page: Supported 00:10:54.863 Get Log Page Extended Data: Supported 00:10:54.863 Telemetry Log Pages: Not Supported 00:10:54.863 Persistent Event Log Pages: Not Supported 00:10:54.863 Supported Log Pages Log Page: May Support 00:10:54.863 Commands Supported & Effects Log Page: Not Supported 00:10:54.863 Feature Identifiers & Effects Log Page: May Support 00:10:54.863 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.863 Data Area 4 for Telemetry Log: Not Supported 00:10:54.863 Error Log Page Entries Supported: 1 00:10:54.863 Keep Alive: Not Supported 00:10:54.863 00:10:54.863 NVM Command Set Attributes 00:10:54.863 ========================== 00:10:54.863 Submission Queue Entry Size 00:10:54.863 Max: 64 00:10:54.863 Min: 64 00:10:54.863 Completion Queue Entry Size 00:10:54.863 Max: 16 00:10:54.863 Min: 16 00:10:54.863 Number of Namespaces: 256 00:10:54.863 Compare Command: Supported 00:10:54.863 Write Uncorrectable Command: Not Supported 00:10:54.863 Dataset Management Command: Supported 00:10:54.863 Write Zeroes Command: Supported 00:10:54.863 Set Features Save Field: Supported 00:10:54.863 Reservations: Not Supported 00:10:54.863 Timestamp: Supported 00:10:54.863 Copy: Supported 00:10:54.863 Volatile Write Cache: Present 00:10:54.863 Atomic Write Unit (Normal): 1 00:10:54.863 Atomic Write Unit (PFail): 1 00:10:54.863 Atomic Compare & Write Unit: 1 00:10:54.863 Fused Compare & Write: Not Supported 00:10:54.863 Scatter-Gather List 00:10:54.863 SGL Command Set: Supported 00:10:54.863 SGL Keyed: Not Supported 00:10:54.863 SGL Bit Bucket Descriptor: Not Supported 00:10:54.863 SGL Metadata Pointer: Not Supported 00:10:54.863 Oversized SGL: Not Supported 00:10:54.863 SGL Metadata Address: Not Supported 00:10:54.863 SGL Offset: Not Supported 00:10:54.863 Transport SGL Data Block: Not Supported 00:10:54.863 Replay Protected Memory Block: Not Supported 00:10:54.863 00:10:54.863 Firmware Slot Information 00:10:54.863 ========================= 00:10:54.863 Active slot: 1 00:10:54.863 Slot 1 Firmware Revision: 1.0 00:10:54.863 00:10:54.863 00:10:54.863 Commands Supported and Effects 00:10:54.863 ============================== 00:10:54.863 Admin Commands 00:10:54.863 -------------- 00:10:54.863 Delete I/O Submission Queue (00h): Supported 00:10:54.863 Create I/O Submission Queue (01h): Supported 00:10:54.863 Get Log Page (02h): Supported 00:10:54.863 Delete I/O Completion Queue (04h): Supported 00:10:54.863 Create I/O Completion Queue (05h): Supported 00:10:54.863 Identify (06h): Supported 00:10:54.863 Abort (08h): Supported 00:10:54.863 Set Features (09h): Supported 00:10:54.863 Get Features (0Ah): Supported 00:10:54.863 Asynchronous Event Request (0Ch): Supported 00:10:54.863 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:54.863 Directive Send (19h): Supported 00:10:54.863 Directive Receive (1Ah): Supported 00:10:54.863 Virtualization Management (1Ch): Supported 00:10:54.863 Doorbell Buffer Config (7Ch): Supported 00:10:54.863 Format NVM (80h): Supported LBA-Change 00:10:54.863 I/O Commands 00:10:54.863 ------------ 00:10:54.863 Flush (00h): Supported LBA-Change 00:10:54.863 Write (01h): Supported LBA-Change 00:10:54.863 Read (02h): Supported 00:10:54.863 Compare (05h): Supported 00:10:54.863 Write Zeroes (08h): Supported LBA-Change 00:10:54.863 
Dataset Management (09h): Supported LBA-Change 00:10:54.863 Unknown (0Ch): Supported 00:10:54.863 Unknown (12h): Supported 00:10:54.863 Copy (19h): Supported LBA-Change 00:10:54.863 Unknown (1Dh): Supported LBA-Change 00:10:54.863 00:10:54.863 Error Log 00:10:54.863 ========= 00:10:54.863 00:10:54.863 Arbitration 00:10:54.863 =========== 00:10:54.863 Arbitration Burst: no limit 00:10:54.863 00:10:54.863 Power Management 00:10:54.863 ================ 00:10:54.863 Number of Power States: 1 00:10:54.863 Current Power State: Power State #0 00:10:54.863 Power State #0: 00:10:54.863 Max Power: 25.00 W 00:10:54.863 Non-Operational State: Operational 00:10:54.863 Entry Latency: 16 microseconds 00:10:54.863 Exit Latency: 4 microseconds 00:10:54.863 Relative Read Throughput: 0 00:10:54.863 Relative Read Latency: 0 00:10:54.863 Relative Write Throughput: 0 00:10:54.863 Relative Write Latency: 0 00:10:54.863 Idle Power: Not Reported 00:10:54.863 Active Power: Not Reported 00:10:54.863 Non-Operational Permissive Mode: Not Supported 00:10:54.863 00:10:54.863 Health Information 00:10:54.863 ================== 00:10:54.863 Critical Warnings: 00:10:54.863 Available Spare Space: OK 00:10:54.863 Temperature: OK 00:10:54.863 Device Reliability: OK 00:10:54.863 Read Only: No 00:10:54.863 Volatile Memory Backup: OK 00:10:54.863 Current Temperature: 323 Kelvin (50 Celsius) 00:10:54.863 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:54.863 Available Spare: 0% 00:10:54.863 Available Spare Threshold: 0% 00:10:54.863 Life Percentage Used: 0% 00:10:54.863 Data Units Read: 1156 00:10:54.863 Data Units Written: 1020 00:10:54.863 Host Read Commands: 53352 00:10:54.863 Host Write Commands: 52139 00:10:54.863 Controller Busy Time: 0 minutes 00:10:54.863 Power Cycles: 0 00:10:54.863 Power On Hours: 0 hours 00:10:54.863 Unsafe Shutdowns: 0 00:10:54.863 Unrecoverable Media Errors: 0 00:10:54.863 Lifetime Error Log Entries: 0 00:10:54.863 Warning Temperature Time: 0 minutes 00:10:54.863 Critical Temperature Time: 0 minutes 00:10:54.863 00:10:54.863 Number of Queues 00:10:54.863 ================ 00:10:54.863 Number of I/O Submission Queues: 64 00:10:54.863 Number of I/O Completion Queues: 64 00:10:54.863 00:10:54.863 ZNS Specific Controller Data 00:10:54.863 ============================ 00:10:54.863 Zone Append Size Limit: 0 00:10:54.863 00:10:54.863 00:10:54.863 Active Namespaces 00:10:54.863 ================= 00:10:54.863 Namespace ID:1 00:10:54.863 Error Recovery Timeout: Unlimited 00:10:54.863 Command Set Identifier: NVM (00h) 00:10:54.863 Deallocate: Supported 00:10:54.863 Deallocated/Unwritten Error: Supported 00:10:54.863 Deallocated Read Value: All 0x00 00:10:54.863 Deallocate in Write Zeroes: Not Supported 00:10:54.863 Deallocated Guard Field: 0xFFFF 00:10:54.863 Flush: Supported 00:10:54.863 Reservation: Not Supported 00:10:54.863 Namespace Sharing Capabilities: Private 00:10:54.863 Size (in LBAs): 1310720 (5GiB) 00:10:54.863 Capacity (in LBAs): 1310720 (5GiB) 00:10:54.863 Utilization (in LBAs): 1310720 (5GiB) 00:10:54.863 Thin Provisioning: Not Supported 00:10:54.863 Per-NS Atomic Units: No 00:10:54.863 Maximum Single Source Range Length: 128 00:10:54.863 Maximum Copy Length: 128 00:10:54.863 Maximum Source Range Count: 128 00:10:54.863 NGUID/EUI64 Never Reused: No 00:10:54.863 Namespace Write Protected: No 00:10:54.863 Number of LBA Formats: 8 00:10:54.863 Current LBA Format: LBA Format #04 00:10:54.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.863 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:10:54.863 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:54.863 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:54.863 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:54.863 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:54.863 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:54.863 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:54.863 00:10:54.863 NVM Specific Namespace Data 00:10:54.863 =========================== 00:10:54.863 Logical Block Storage Tag Mask: 0 00:10:54.863 Protection Information Capabilities: 00:10:54.863 16b Guard Protection Information Storage Tag Support: No 00:10:54.863 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:54.863 Storage Tag Check Read Support: No 00:10:54.863 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.863 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.863 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.863 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.863 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.863 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.864 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.864 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:54.864 08:23:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:54.864 08:23:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:55.123 ===================================================== 00:10:55.123 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:55.123 ===================================================== 00:10:55.123 Controller Capabilities/Features 00:10:55.123 ================================ 00:10:55.123 Vendor ID: 1b36 00:10:55.123 Subsystem Vendor ID: 1af4 00:10:55.123 Serial Number: 12342 00:10:55.123 Model Number: QEMU NVMe Ctrl 00:10:55.123 Firmware Version: 8.0.0 00:10:55.123 Recommended Arb Burst: 6 00:10:55.123 IEEE OUI Identifier: 00 54 52 00:10:55.123 Multi-path I/O 00:10:55.123 May have multiple subsystem ports: No 00:10:55.123 May have multiple controllers: No 00:10:55.123 Associated with SR-IOV VF: No 00:10:55.123 Max Data Transfer Size: 524288 00:10:55.123 Max Number of Namespaces: 256 00:10:55.123 Max Number of I/O Queues: 64 00:10:55.123 NVMe Specification Version (VS): 1.4 00:10:55.123 NVMe Specification Version (Identify): 1.4 00:10:55.123 Maximum Queue Entries: 2048 00:10:55.123 Contiguous Queues Required: Yes 00:10:55.123 Arbitration Mechanisms Supported 00:10:55.123 Weighted Round Robin: Not Supported 00:10:55.123 Vendor Specific: Not Supported 00:10:55.123 Reset Timeout: 7500 ms 00:10:55.123 Doorbell Stride: 4 bytes 00:10:55.123 NVM Subsystem Reset: Not Supported 00:10:55.123 Command Sets Supported 00:10:55.123 NVM Command Set: Supported 00:10:55.123 Boot Partition: Not Supported 00:10:55.123 Memory Page Size Minimum: 4096 bytes 00:10:55.123 Memory Page Size Maximum: 65536 bytes 00:10:55.123 Persistent Memory Region: Not Supported 00:10:55.123 Optional Asynchronous Events Supported 00:10:55.123 Namespace Attribute Notices: Supported 00:10:55.123 
Firmware Activation Notices: Not Supported 00:10:55.123 ANA Change Notices: Not Supported 00:10:55.123 PLE Aggregate Log Change Notices: Not Supported 00:10:55.123 LBA Status Info Alert Notices: Not Supported 00:10:55.123 EGE Aggregate Log Change Notices: Not Supported 00:10:55.123 Normal NVM Subsystem Shutdown event: Not Supported 00:10:55.123 Zone Descriptor Change Notices: Not Supported 00:10:55.123 Discovery Log Change Notices: Not Supported 00:10:55.123 Controller Attributes 00:10:55.123 128-bit Host Identifier: Not Supported 00:10:55.123 Non-Operational Permissive Mode: Not Supported 00:10:55.123 NVM Sets: Not Supported 00:10:55.123 Read Recovery Levels: Not Supported 00:10:55.123 Endurance Groups: Not Supported 00:10:55.123 Predictable Latency Mode: Not Supported 00:10:55.123 Traffic Based Keep Alive: Not Supported 00:10:55.123 Namespace Granularity: Not Supported 00:10:55.123 SQ Associations: Not Supported 00:10:55.123 UUID List: Not Supported 00:10:55.123 Multi-Domain Subsystem: Not Supported 00:10:55.123 Fixed Capacity Management: Not Supported 00:10:55.123 Variable Capacity Management: Not Supported 00:10:55.123 Delete Endurance Group: Not Supported 00:10:55.123 Delete NVM Set: Not Supported 00:10:55.123 Extended LBA Formats Supported: Supported 00:10:55.123 Flexible Data Placement Supported: Not Supported 00:10:55.123 00:10:55.123 Controller Memory Buffer Support 00:10:55.123 ================================ 00:10:55.123 Supported: No 00:10:55.123 00:10:55.123 Persistent Memory Region Support 00:10:55.123 ================================ 00:10:55.123 Supported: No 00:10:55.123 00:10:55.123 Admin Command Set Attributes 00:10:55.123 ============================ 00:10:55.123 Security Send/Receive: Not Supported 00:10:55.123 Format NVM: Supported 00:10:55.123 Firmware Activate/Download: Not Supported 00:10:55.123 Namespace Management: Supported 00:10:55.123 Device Self-Test: Not Supported 00:10:55.123 Directives: Supported 00:10:55.123 NVMe-MI: Not Supported 00:10:55.123 Virtualization Management: Not Supported 00:10:55.123 Doorbell Buffer Config: Supported 00:10:55.123 Get LBA Status Capability: Not Supported 00:10:55.123 Command & Feature Lockdown Capability: Not Supported 00:10:55.123 Abort Command Limit: 4 00:10:55.123 Async Event Request Limit: 4 00:10:55.123 Number of Firmware Slots: N/A 00:10:55.123 Firmware Slot 1 Read-Only: N/A 00:10:55.123 Firmware Activation Without Reset: N/A 00:10:55.123 Multiple Update Detection Support: N/A 00:10:55.123 Firmware Update Granularity: No Information Provided 00:10:55.123 Per-Namespace SMART Log: Yes 00:10:55.123 Asymmetric Namespace Access Log Page: Not Supported 00:10:55.123 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:55.123 Command Effects Log Page: Supported 00:10:55.123 Get Log Page Extended Data: Supported 00:10:55.123 Telemetry Log Pages: Not Supported 00:10:55.123 Persistent Event Log Pages: Not Supported 00:10:55.123 Supported Log Pages Log Page: May Support 00:10:55.123 Commands Supported & Effects Log Page: Not Supported 00:10:55.123 Feature Identifiers & Effects Log Page: May Support 00:10:55.123 NVMe-MI Commands & Effects Log Page: May Support 00:10:55.123 Data Area 4 for Telemetry Log: Not Supported 00:10:55.123 Error Log Page Entries Supported: 1 00:10:55.123 Keep Alive: Not Supported 00:10:55.123 00:10:55.123 NVM Command Set Attributes 00:10:55.123 ========================== 00:10:55.123 Submission Queue Entry Size 00:10:55.123 Max: 64 00:10:55.123 Min: 64 00:10:55.123 Completion Queue Entry Size 00:10:55.123 Max: 16 
00:10:55.123 Min: 16 00:10:55.123 Number of Namespaces: 256 00:10:55.123 Compare Command: Supported 00:10:55.123 Write Uncorrectable Command: Not Supported 00:10:55.123 Dataset Management Command: Supported 00:10:55.123 Write Zeroes Command: Supported 00:10:55.123 Set Features Save Field: Supported 00:10:55.123 Reservations: Not Supported 00:10:55.123 Timestamp: Supported 00:10:55.123 Copy: Supported 00:10:55.123 Volatile Write Cache: Present 00:10:55.123 Atomic Write Unit (Normal): 1 00:10:55.123 Atomic Write Unit (PFail): 1 00:10:55.123 Atomic Compare & Write Unit: 1 00:10:55.123 Fused Compare & Write: Not Supported 00:10:55.123 Scatter-Gather List 00:10:55.123 SGL Command Set: Supported 00:10:55.123 SGL Keyed: Not Supported 00:10:55.123 SGL Bit Bucket Descriptor: Not Supported 00:10:55.123 SGL Metadata Pointer: Not Supported 00:10:55.123 Oversized SGL: Not Supported 00:10:55.123 SGL Metadata Address: Not Supported 00:10:55.123 SGL Offset: Not Supported 00:10:55.123 Transport SGL Data Block: Not Supported 00:10:55.123 Replay Protected Memory Block: Not Supported 00:10:55.123 00:10:55.123 Firmware Slot Information 00:10:55.123 ========================= 00:10:55.123 Active slot: 1 00:10:55.123 Slot 1 Firmware Revision: 1.0 00:10:55.123 00:10:55.123 00:10:55.123 Commands Supported and Effects 00:10:55.123 ============================== 00:10:55.123 Admin Commands 00:10:55.123 -------------- 00:10:55.123 Delete I/O Submission Queue (00h): Supported 00:10:55.123 Create I/O Submission Queue (01h): Supported 00:10:55.123 Get Log Page (02h): Supported 00:10:55.123 Delete I/O Completion Queue (04h): Supported 00:10:55.123 Create I/O Completion Queue (05h): Supported 00:10:55.123 Identify (06h): Supported 00:10:55.123 Abort (08h): Supported 00:10:55.123 Set Features (09h): Supported 00:10:55.123 Get Features (0Ah): Supported 00:10:55.123 Asynchronous Event Request (0Ch): Supported 00:10:55.123 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:55.123 Directive Send (19h): Supported 00:10:55.123 Directive Receive (1Ah): Supported 00:10:55.123 Virtualization Management (1Ch): Supported 00:10:55.123 Doorbell Buffer Config (7Ch): Supported 00:10:55.124 Format NVM (80h): Supported LBA-Change 00:10:55.124 I/O Commands 00:10:55.124 ------------ 00:10:55.124 Flush (00h): Supported LBA-Change 00:10:55.124 Write (01h): Supported LBA-Change 00:10:55.124 Read (02h): Supported 00:10:55.124 Compare (05h): Supported 00:10:55.124 Write Zeroes (08h): Supported LBA-Change 00:10:55.124 Dataset Management (09h): Supported LBA-Change 00:10:55.124 Unknown (0Ch): Supported 00:10:55.124 Unknown (12h): Supported 00:10:55.124 Copy (19h): Supported LBA-Change 00:10:55.124 Unknown (1Dh): Supported LBA-Change 00:10:55.124 00:10:55.124 Error Log 00:10:55.124 ========= 00:10:55.124 00:10:55.124 Arbitration 00:10:55.124 =========== 00:10:55.124 Arbitration Burst: no limit 00:10:55.124 00:10:55.124 Power Management 00:10:55.124 ================ 00:10:55.124 Number of Power States: 1 00:10:55.124 Current Power State: Power State #0 00:10:55.124 Power State #0: 00:10:55.124 Max Power: 25.00 W 00:10:55.124 Non-Operational State: Operational 00:10:55.124 Entry Latency: 16 microseconds 00:10:55.124 Exit Latency: 4 microseconds 00:10:55.124 Relative Read Throughput: 0 00:10:55.124 Relative Read Latency: 0 00:10:55.124 Relative Write Throughput: 0 00:10:55.124 Relative Write Latency: 0 00:10:55.124 Idle Power: Not Reported 00:10:55.124 Active Power: Not Reported 00:10:55.124 Non-Operational Permissive Mode: Not Supported 
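[Annotation] The Health Information blocks in these dumps report temperatures in Kelvin, which is how NVMe defines the composite temperature field; the Celsius figure in parentheses is derived by subtracting 273. A one-line check of the values reported below (323 K current, 343 K threshold), assuming nothing beyond POSIX shell arithmetic:
  $ echo "$(( 323 - 273 ))C current, $(( 343 - 273 ))C threshold"
  50C current, 70C threshold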
00:10:55.124 00:10:55.124 Health Information 00:10:55.124 ================== 00:10:55.124 Critical Warnings: 00:10:55.124 Available Spare Space: OK 00:10:55.124 Temperature: OK 00:10:55.124 Device Reliability: OK 00:10:55.124 Read Only: No 00:10:55.124 Volatile Memory Backup: OK 00:10:55.124 Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.124 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:55.124 Available Spare: 0% 00:10:55.124 Available Spare Threshold: 0% 00:10:55.124 Life Percentage Used: 0% 00:10:55.124 Data Units Read: 2300 00:10:55.124 Data Units Written: 2100 00:10:55.124 Host Read Commands: 107952 00:10:55.124 Host Write Commands: 106515 00:10:55.124 Controller Busy Time: 0 minutes 00:10:55.124 Power Cycles: 0 00:10:55.124 Power On Hours: 0 hours 00:10:55.124 Unsafe Shutdowns: 0 00:10:55.124 Unrecoverable Media Errors: 0 00:10:55.124 Lifetime Error Log Entries: 0 00:10:55.124 Warning Temperature Time: 0 minutes 00:10:55.124 Critical Temperature Time: 0 minutes 00:10:55.124 00:10:55.124 Number of Queues 00:10:55.124 ================ 00:10:55.124 Number of I/O Submission Queues: 64 00:10:55.124 Number of I/O Completion Queues: 64 00:10:55.124 00:10:55.124 ZNS Specific Controller Data 00:10:55.124 ============================ 00:10:55.124 Zone Append Size Limit: 0 00:10:55.124 00:10:55.124 00:10:55.124 Active Namespaces 00:10:55.124 ================= 00:10:55.124 Namespace ID:1 00:10:55.124 Error Recovery Timeout: Unlimited 00:10:55.124 Command Set Identifier: NVM (00h) 00:10:55.124 Deallocate: Supported 00:10:55.124 Deallocated/Unwritten Error: Supported 00:10:55.124 Deallocated Read Value: All 0x00 00:10:55.124 Deallocate in Write Zeroes: Not Supported 00:10:55.124 Deallocated Guard Field: 0xFFFF 00:10:55.124 Flush: Supported 00:10:55.124 Reservation: Not Supported 00:10:55.124 Namespace Sharing Capabilities: Private 00:10:55.124 Size (in LBAs): 1048576 (4GiB) 00:10:55.124 Capacity (in LBAs): 1048576 (4GiB) 00:10:55.124 Utilization (in LBAs): 1048576 (4GiB) 00:10:55.124 Thin Provisioning: Not Supported 00:10:55.124 Per-NS Atomic Units: No 00:10:55.124 Maximum Single Source Range Length: 128 00:10:55.124 Maximum Copy Length: 128 00:10:55.124 Maximum Source Range Count: 128 00:10:55.124 NGUID/EUI64 Never Reused: No 00:10:55.124 Namespace Write Protected: No 00:10:55.124 Number of LBA Formats: 8 00:10:55.124 Current LBA Format: LBA Format #04 00:10:55.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:55.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:55.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:55.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:55.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:55.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:55.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:55.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:55.124 00:10:55.124 NVM Specific Namespace Data 00:10:55.124 =========================== 00:10:55.124 Logical Block Storage Tag Mask: 0 00:10:55.124 Protection Information Capabilities: 00:10:55.124 16b Guard Protection Information Storage Tag Support: No 00:10:55.124 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:55.124 Storage Tag Check Read Support: No 00:10:55.124 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Namespace ID:2 00:10:55.124 Error Recovery Timeout: Unlimited 00:10:55.124 Command Set Identifier: NVM (00h) 00:10:55.124 Deallocate: Supported 00:10:55.124 Deallocated/Unwritten Error: Supported 00:10:55.124 Deallocated Read Value: All 0x00 00:10:55.124 Deallocate in Write Zeroes: Not Supported 00:10:55.124 Deallocated Guard Field: 0xFFFF 00:10:55.124 Flush: Supported 00:10:55.124 Reservation: Not Supported 00:10:55.124 Namespace Sharing Capabilities: Private 00:10:55.124 Size (in LBAs): 1048576 (4GiB) 00:10:55.124 Capacity (in LBAs): 1048576 (4GiB) 00:10:55.124 Utilization (in LBAs): 1048576 (4GiB) 00:10:55.124 Thin Provisioning: Not Supported 00:10:55.124 Per-NS Atomic Units: No 00:10:55.124 Maximum Single Source Range Length: 128 00:10:55.124 Maximum Copy Length: 128 00:10:55.124 Maximum Source Range Count: 128 00:10:55.124 NGUID/EUI64 Never Reused: No 00:10:55.124 Namespace Write Protected: No 00:10:55.124 Number of LBA Formats: 8 00:10:55.124 Current LBA Format: LBA Format #04 00:10:55.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:55.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:55.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:55.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:55.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:55.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:55.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:55.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:55.124 00:10:55.124 NVM Specific Namespace Data 00:10:55.124 =========================== 00:10:55.124 Logical Block Storage Tag Mask: 0 00:10:55.124 Protection Information Capabilities: 00:10:55.124 16b Guard Protection Information Storage Tag Support: No 00:10:55.124 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:55.124 Storage Tag Check Read Support: No 00:10:55.124 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Namespace ID:3 00:10:55.124 Error Recovery Timeout: Unlimited 00:10:55.124 Command Set Identifier: NVM (00h) 00:10:55.124 Deallocate: Supported 00:10:55.124 Deallocated/Unwritten Error: Supported 00:10:55.124 Deallocated Read 
Value: All 0x00 00:10:55.124 Deallocate in Write Zeroes: Not Supported 00:10:55.124 Deallocated Guard Field: 0xFFFF 00:10:55.124 Flush: Supported 00:10:55.124 Reservation: Not Supported 00:10:55.124 Namespace Sharing Capabilities: Private 00:10:55.124 Size (in LBAs): 1048576 (4GiB) 00:10:55.124 Capacity (in LBAs): 1048576 (4GiB) 00:10:55.124 Utilization (in LBAs): 1048576 (4GiB) 00:10:55.124 Thin Provisioning: Not Supported 00:10:55.124 Per-NS Atomic Units: No 00:10:55.124 Maximum Single Source Range Length: 128 00:10:55.124 Maximum Copy Length: 128 00:10:55.124 Maximum Source Range Count: 128 00:10:55.124 NGUID/EUI64 Never Reused: No 00:10:55.124 Namespace Write Protected: No 00:10:55.124 Number of LBA Formats: 8 00:10:55.124 Current LBA Format: LBA Format #04 00:10:55.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:55.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:55.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:55.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:55.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:55.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:55.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:55.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:55.124 00:10:55.124 NVM Specific Namespace Data 00:10:55.124 =========================== 00:10:55.124 Logical Block Storage Tag Mask: 0 00:10:55.124 Protection Information Capabilities: 00:10:55.124 16b Guard Protection Information Storage Tag Support: No 00:10:55.124 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:55.124 Storage Tag Check Read Support: No 00:10:55.124 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.124 08:23:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:55.124 08:23:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:55.383 ===================================================== 00:10:55.383 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:55.383 ===================================================== 00:10:55.383 Controller Capabilities/Features 00:10:55.383 ================================ 00:10:55.383 Vendor ID: 1b36 00:10:55.383 Subsystem Vendor ID: 1af4 00:10:55.383 Serial Number: 12343 00:10:55.383 Model Number: QEMU NVMe Ctrl 00:10:55.383 Firmware Version: 8.0.0 00:10:55.383 Recommended Arb Burst: 6 00:10:55.383 IEEE OUI Identifier: 00 54 52 00:10:55.383 Multi-path I/O 00:10:55.383 May have multiple subsystem ports: No 00:10:55.383 May have multiple controllers: Yes 00:10:55.383 Associated with SR-IOV VF: No 00:10:55.383 Max Data Transfer Size: 524288 00:10:55.383 Max Number of Namespaces: 
256 00:10:55.383 Max Number of I/O Queues: 64 00:10:55.383 NVMe Specification Version (VS): 1.4 00:10:55.383 NVMe Specification Version (Identify): 1.4 00:10:55.383 Maximum Queue Entries: 2048 00:10:55.383 Contiguous Queues Required: Yes 00:10:55.383 Arbitration Mechanisms Supported 00:10:55.383 Weighted Round Robin: Not Supported 00:10:55.383 Vendor Specific: Not Supported 00:10:55.383 Reset Timeout: 7500 ms 00:10:55.383 Doorbell Stride: 4 bytes 00:10:55.383 NVM Subsystem Reset: Not Supported 00:10:55.383 Command Sets Supported 00:10:55.383 NVM Command Set: Supported 00:10:55.383 Boot Partition: Not Supported 00:10:55.383 Memory Page Size Minimum: 4096 bytes 00:10:55.383 Memory Page Size Maximum: 65536 bytes 00:10:55.383 Persistent Memory Region: Not Supported 00:10:55.383 Optional Asynchronous Events Supported 00:10:55.383 Namespace Attribute Notices: Supported 00:10:55.383 Firmware Activation Notices: Not Supported 00:10:55.383 ANA Change Notices: Not Supported 00:10:55.383 PLE Aggregate Log Change Notices: Not Supported 00:10:55.383 LBA Status Info Alert Notices: Not Supported 00:10:55.383 EGE Aggregate Log Change Notices: Not Supported 00:10:55.383 Normal NVM Subsystem Shutdown event: Not Supported 00:10:55.383 Zone Descriptor Change Notices: Not Supported 00:10:55.383 Discovery Log Change Notices: Not Supported 00:10:55.383 Controller Attributes 00:10:55.383 128-bit Host Identifier: Not Supported 00:10:55.383 Non-Operational Permissive Mode: Not Supported 00:10:55.383 NVM Sets: Not Supported 00:10:55.383 Read Recovery Levels: Not Supported 00:10:55.383 Endurance Groups: Supported 00:10:55.383 Predictable Latency Mode: Not Supported 00:10:55.383 Traffic Based Keep Alive: Not Supported 00:10:55.383 Namespace Granularity: Not Supported 00:10:55.383 SQ Associations: Not Supported 00:10:55.383 UUID List: Not Supported 00:10:55.383 Multi-Domain Subsystem: Not Supported 00:10:55.383 Fixed Capacity Management: Not Supported 00:10:55.383 Variable Capacity Management: Not Supported 00:10:55.383 Delete Endurance Group: Not Supported 00:10:55.383 Delete NVM Set: Not Supported 00:10:55.383 Extended LBA Formats Supported: Supported 00:10:55.383 Flexible Data Placement Supported: Supported 00:10:55.383 00:10:55.383 Controller Memory Buffer Support 00:10:55.383 ================================ 00:10:55.383 Supported: No 00:10:55.383 00:10:55.383 Persistent Memory Region Support 00:10:55.383 ================================ 00:10:55.383 Supported: No 00:10:55.383 00:10:55.383 Admin Command Set Attributes 00:10:55.383 ============================ 00:10:55.383 Security Send/Receive: Not Supported 00:10:55.383 Format NVM: Supported 00:10:55.383 Firmware Activate/Download: Not Supported 00:10:55.383 Namespace Management: Supported 00:10:55.383 Device Self-Test: Not Supported 00:10:55.383 Directives: Supported 00:10:55.383 NVMe-MI: Not Supported 00:10:55.383 Virtualization Management: Not Supported 00:10:55.383 Doorbell Buffer Config: Supported 00:10:55.384 Get LBA Status Capability: Not Supported 00:10:55.384 Command & Feature Lockdown Capability: Not Supported 00:10:55.384 Abort Command Limit: 4 00:10:55.384 Async Event Request Limit: 4 00:10:55.384 Number of Firmware Slots: N/A 00:10:55.384 Firmware Slot 1 Read-Only: N/A 00:10:55.384 Firmware Activation Without Reset: N/A 00:10:55.384 Multiple Update Detection Support: N/A 00:10:55.384 Firmware Update Granularity: No Information Provided 00:10:55.384 Per-Namespace SMART Log: Yes 00:10:55.384 Asymmetric Namespace Access Log Page: Not Supported 
00:10:55.384 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:55.384 Command Effects Log Page: Supported 00:10:55.384 Get Log Page Extended Data: Supported 00:10:55.384 Telemetry Log Pages: Not Supported 00:10:55.384 Persistent Event Log Pages: Not Supported 00:10:55.384 Supported Log Pages Log Page: May Support 00:10:55.384 Commands Supported & Effects Log Page: Not Supported 00:10:55.384 Feature Identifiers & Effects Log Page: May Support 00:10:55.384 NVMe-MI Commands & Effects Log Page: May Support 00:10:55.384 Data Area 4 for Telemetry Log: Not Supported 00:10:55.384 Error Log Page Entries Supported: 1 00:10:55.384 Keep Alive: Not Supported 00:10:55.384 00:10:55.384 NVM Command Set Attributes 00:10:55.384 ========================== 00:10:55.384 Submission Queue Entry Size 00:10:55.384 Max: 64 00:10:55.384 Min: 64 00:10:55.384 Completion Queue Entry Size 00:10:55.384 Max: 16 00:10:55.384 Min: 16 00:10:55.384 Number of Namespaces: 256 00:10:55.384 Compare Command: Supported 00:10:55.384 Write Uncorrectable Command: Not Supported 00:10:55.384 Dataset Management Command: Supported 00:10:55.384 Write Zeroes Command: Supported 00:10:55.384 Set Features Save Field: Supported 00:10:55.384 Reservations: Not Supported 00:10:55.384 Timestamp: Supported 00:10:55.384 Copy: Supported 00:10:55.384 Volatile Write Cache: Present 00:10:55.384 Atomic Write Unit (Normal): 1 00:10:55.384 Atomic Write Unit (PFail): 1 00:10:55.384 Atomic Compare & Write Unit: 1 00:10:55.384 Fused Compare & Write: Not Supported 00:10:55.384 Scatter-Gather List 00:10:55.384 SGL Command Set: Supported 00:10:55.384 SGL Keyed: Not Supported 00:10:55.384 SGL Bit Bucket Descriptor: Not Supported 00:10:55.384 SGL Metadata Pointer: Not Supported 00:10:55.384 Oversized SGL: Not Supported 00:10:55.384 SGL Metadata Address: Not Supported 00:10:55.384 SGL Offset: Not Supported 00:10:55.384 Transport SGL Data Block: Not Supported 00:10:55.384 Replay Protected Memory Block: Not Supported 00:10:55.384 00:10:55.384 Firmware Slot Information 00:10:55.384 ========================= 00:10:55.384 Active slot: 1 00:10:55.384 Slot 1 Firmware Revision: 1.0 00:10:55.384 00:10:55.384 00:10:55.384 Commands Supported and Effects 00:10:55.384 ============================== 00:10:55.384 Admin Commands 00:10:55.384 -------------- 00:10:55.384 Delete I/O Submission Queue (00h): Supported 00:10:55.384 Create I/O Submission Queue (01h): Supported 00:10:55.384 Get Log Page (02h): Supported 00:10:55.384 Delete I/O Completion Queue (04h): Supported 00:10:55.384 Create I/O Completion Queue (05h): Supported 00:10:55.384 Identify (06h): Supported 00:10:55.384 Abort (08h): Supported 00:10:55.384 Set Features (09h): Supported 00:10:55.384 Get Features (0Ah): Supported 00:10:55.384 Asynchronous Event Request (0Ch): Supported 00:10:55.384 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:55.384 Directive Send (19h): Supported 00:10:55.384 Directive Receive (1Ah): Supported 00:10:55.384 Virtualization Management (1Ch): Supported 00:10:55.384 Doorbell Buffer Config (7Ch): Supported 00:10:55.384 Format NVM (80h): Supported LBA-Change 00:10:55.384 I/O Commands 00:10:55.384 ------------ 00:10:55.384 Flush (00h): Supported LBA-Change 00:10:55.384 Write (01h): Supported LBA-Change 00:10:55.384 Read (02h): Supported 00:10:55.384 Compare (05h): Supported 00:10:55.384 Write Zeroes (08h): Supported LBA-Change 00:10:55.384 Dataset Management (09h): Supported LBA-Change 00:10:55.384 Unknown (0Ch): Supported 00:10:55.384 Unknown (12h): Supported 00:10:55.384 Copy 
(19h): Supported LBA-Change 00:10:55.384 Unknown (1Dh): Supported LBA-Change 00:10:55.384 00:10:55.384 Error Log 00:10:55.384 ========= 00:10:55.384 00:10:55.384 Arbitration 00:10:55.384 =========== 00:10:55.384 Arbitration Burst: no limit 00:10:55.384 00:10:55.384 Power Management 00:10:55.384 ================ 00:10:55.384 Number of Power States: 1 00:10:55.384 Current Power State: Power State #0 00:10:55.384 Power State #0: 00:10:55.384 Max Power: 25.00 W 00:10:55.384 Non-Operational State: Operational 00:10:55.384 Entry Latency: 16 microseconds 00:10:55.384 Exit Latency: 4 microseconds 00:10:55.384 Relative Read Throughput: 0 00:10:55.384 Relative Read Latency: 0 00:10:55.384 Relative Write Throughput: 0 00:10:55.384 Relative Write Latency: 0 00:10:55.384 Idle Power: Not Reported 00:10:55.384 Active Power: Not Reported 00:10:55.384 Non-Operational Permissive Mode: Not Supported 00:10:55.384 00:10:55.384 Health Information 00:10:55.384 ================== 00:10:55.384 Critical Warnings: 00:10:55.384 Available Spare Space: OK 00:10:55.384 Temperature: OK 00:10:55.384 Device Reliability: OK 00:10:55.384 Read Only: No 00:10:55.384 Volatile Memory Backup: OK 00:10:55.384 Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.384 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:55.384 Available Spare: 0% 00:10:55.384 Available Spare Threshold: 0% 00:10:55.384 Life Percentage Used: 0% 00:10:55.384 Data Units Read: 832 00:10:55.384 Data Units Written: 765 00:10:55.384 Host Read Commands: 36690 00:10:55.384 Host Write Commands: 36211 00:10:55.384 Controller Busy Time: 0 minutes 00:10:55.384 Power Cycles: 0 00:10:55.384 Power On Hours: 0 hours 00:10:55.384 Unsafe Shutdowns: 0 00:10:55.384 Unrecoverable Media Errors: 0 00:10:55.384 Lifetime Error Log Entries: 0 00:10:55.384 Warning Temperature Time: 0 minutes 00:10:55.384 Critical Temperature Time: 0 minutes 00:10:55.384 00:10:55.384 Number of Queues 00:10:55.384 ================ 00:10:55.384 Number of I/O Submission Queues: 64 00:10:55.384 Number of I/O Completion Queues: 64 00:10:55.384 00:10:55.384 ZNS Specific Controller Data 00:10:55.384 ============================ 00:10:55.384 Zone Append Size Limit: 0 00:10:55.384 00:10:55.384 00:10:55.384 Active Namespaces 00:10:55.384 ================= 00:10:55.384 Namespace ID:1 00:10:55.384 Error Recovery Timeout: Unlimited 00:10:55.384 Command Set Identifier: NVM (00h) 00:10:55.384 Deallocate: Supported 00:10:55.384 Deallocated/Unwritten Error: Supported 00:10:55.384 Deallocated Read Value: All 0x00 00:10:55.384 Deallocate in Write Zeroes: Not Supported 00:10:55.384 Deallocated Guard Field: 0xFFFF 00:10:55.384 Flush: Supported 00:10:55.384 Reservation: Not Supported 00:10:55.384 Namespace Sharing Capabilities: Multiple Controllers 00:10:55.384 Size (in LBAs): 262144 (1GiB) 00:10:55.384 Capacity (in LBAs): 262144 (1GiB) 00:10:55.384 Utilization (in LBAs): 262144 (1GiB) 00:10:55.384 Thin Provisioning: Not Supported 00:10:55.384 Per-NS Atomic Units: No 00:10:55.384 Maximum Single Source Range Length: 128 00:10:55.384 Maximum Copy Length: 128 00:10:55.384 Maximum Source Range Count: 128 00:10:55.384 NGUID/EUI64 Never Reused: No 00:10:55.384 Namespace Write Protected: No 00:10:55.384 Endurance group ID: 1 00:10:55.384 Number of LBA Formats: 8 00:10:55.384 Current LBA Format: LBA Format #04 00:10:55.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:55.384 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:55.384 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:55.384 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:55.384 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:55.384 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:55.384 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:55.384 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:55.384 00:10:55.384 Get Feature FDP: 00:10:55.384 ================ 00:10:55.384 Enabled: Yes 00:10:55.384 FDP configuration index: 0 00:10:55.384 00:10:55.384 FDP configurations log page 00:10:55.384 =========================== 00:10:55.384 Number of FDP configurations: 1 00:10:55.384 Version: 0 00:10:55.384 Size: 112 00:10:55.384 FDP Configuration Descriptor: 0 00:10:55.384 Descriptor Size: 96 00:10:55.384 Reclaim Group Identifier format: 2 00:10:55.384 FDP Volatile Write Cache: Not Present 00:10:55.384 FDP Configuration: Valid 00:10:55.384 Vendor Specific Size: 0 00:10:55.384 Number of Reclaim Groups: 2 00:10:55.384 Number of Reclaim Unit Handles: 8 00:10:55.384 Max Placement Identifiers: 128 00:10:55.384 Number of Namespaces Supported: 256 00:10:55.384 Reclaim Unit Nominal Size: 6000000 bytes 00:10:55.384 Estimated Reclaim Unit Time Limit: Not Reported 00:10:55.384 RUH Desc #000: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #001: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #002: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #003: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #004: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #005: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #006: RUH Type: Initially Isolated 00:10:55.384 RUH Desc #007: RUH Type: Initially Isolated 00:10:55.384 00:10:55.384 FDP reclaim unit handle usage log page 00:10:55.643 ====================================== 00:10:55.643 Number of Reclaim Unit Handles: 8 00:10:55.643 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:55.643 RUH Usage Desc #001: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #002: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #003: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #004: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #005: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #006: RUH Attributes: Unused 00:10:55.643 RUH Usage Desc #007: RUH Attributes: Unused 00:10:55.643 00:10:55.643 FDP statistics log page 00:10:55.643 ======================= 00:10:55.643 Host bytes with metadata written: 499687424 00:10:55.643 Media bytes with metadata written: 499740672 00:10:55.643 Media bytes erased: 0 00:10:55.643 00:10:55.643 FDP events log page 00:10:55.643 =================== 00:10:55.643 Number of FDP events: 0 00:10:55.643 00:10:55.643 NVM Specific Namespace Data 00:10:55.643 =========================== 00:10:55.643 Logical Block Storage Tag Mask: 0 00:10:55.643 Protection Information Capabilities: 00:10:55.643 16b Guard Protection Information Storage Tag Support: No 00:10:55.643 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:55.643 Storage Tag Check Read Support: No 00:10:55.643 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:55.643 ************************************ 00:10:55.643 END TEST nvme_identify 00:10:55.643 ************************************ 00:10:55.643 00:10:55.643 real 0m1.728s 00:10:55.643 user 0m0.627s 00:10:55.643 sys 0m0.915s 00:10:55.643 08:23:42 nvme.nvme_identify -- common/autotest_common.sh@1133 -- # xtrace_disable 00:10:55.643 08:23:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:55.643 08:23:43 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:55.643 08:23:43 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:10:55.643 08:23:43 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:10:55.643 08:23:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.643 ************************************ 00:10:55.643 START TEST nvme_perf 00:10:55.643 ************************************ 00:10:55.643 08:23:43 nvme.nvme_perf -- common/autotest_common.sh@1132 -- # nvme_perf 00:10:55.643 08:23:43 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:57.022 Initializing NVMe Controllers 00:10:57.022 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:57.022 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:57.022 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:57.022 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:57.022 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:57.022 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:57.022 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:57.022 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:57.022 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:57.022 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:57.022 Initialization complete. Launching workers. 
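[Annotation] The perf run above drives all six attached namespaces from lcore 0 at queue depth 128 (-q 128), issuing sequential reads (-w read) of 12288-byte (12 KiB) I/Os (-o 12288) over a one-second run (-t 1); judging from the output, the doubled -L flag (-LL) is what enables the per-device latency summaries and histograms that follow (that reading, and the role of -N, are inferred from this log rather than restated from perf's usage text). Latencies in the table are reported in microseconds, and the MiB/s column is simply IOPS times I/O size. A quick sanity check of the first row, assuming nothing beyond standard awk:
  $ awk 'BEGIN { printf "%.2f MiB/s\n", 12872.94 * 12288 / (1024 * 1024) }'
  150.85 MiB/s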
00:10:57.022 ======================================================== 00:10:57.022 Latency(us) 00:10:57.022 Device Information : IOPS MiB/s Average min max 00:10:57.022 PCIE (0000:00:10.0) NSID 1 from core 0: 12872.94 150.85 9965.57 8017.04 55539.04 00:10:57.022 PCIE (0000:00:11.0) NSID 1 from core 0: 12872.94 150.85 9945.80 8160.02 53103.17 00:10:57.022 PCIE (0000:00:13.0) NSID 1 from core 0: 12872.94 150.85 9925.31 8127.14 51356.42 00:10:57.022 PCIE (0000:00:12.0) NSID 1 from core 0: 12872.94 150.85 9904.76 8123.95 48946.38 00:10:57.022 PCIE (0000:00:12.0) NSID 2 from core 0: 12872.94 150.85 9883.49 8114.54 46543.08 00:10:57.022 PCIE (0000:00:12.0) NSID 3 from core 0: 12936.67 151.60 9812.89 8094.17 38388.44 00:10:57.022 ======================================================== 00:10:57.022 Total : 77301.38 905.88 9906.23 8017.04 55539.04 00:10:57.022 00:10:57.022 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:57.022 ================================================================================= 00:10:57.022 1.00000% : 8264.379us 00:10:57.022 10.00000% : 8527.576us 00:10:57.022 25.00000% : 8790.773us 00:10:57.022 50.00000% : 9159.248us 00:10:57.022 75.00000% : 9527.724us 00:10:57.022 90.00000% : 10317.314us 00:10:57.022 95.00000% : 15265.414us 00:10:57.022 98.00000% : 19371.284us 00:10:57.022 99.00000% : 20950.464us 00:10:57.022 99.50000% : 48638.766us 00:10:57.022 99.90000% : 55166.047us 00:10:57.022 99.99000% : 55587.161us 00:10:57.022 99.99900% : 55587.161us 00:10:57.022 99.99990% : 55587.161us 00:10:57.022 99.99999% : 55587.161us 00:10:57.022 00:10:57.022 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:57.022 ================================================================================= 00:10:57.022 1.00000% : 8317.018us 00:10:57.022 10.00000% : 8580.215us 00:10:57.022 25.00000% : 8843.412us 00:10:57.022 50.00000% : 9159.248us 00:10:57.022 75.00000% : 9475.084us 00:10:57.022 90.00000% : 10527.871us 00:10:57.022 95.00000% : 14844.299us 00:10:57.022 98.00000% : 18529.054us 00:10:57.022 99.00000% : 20424.071us 00:10:57.022 99.50000% : 46112.077us 00:10:57.022 99.90000% : 52849.915us 00:10:57.022 99.99000% : 53271.030us 00:10:57.022 99.99900% : 53271.030us 00:10:57.022 99.99990% : 53271.030us 00:10:57.022 99.99999% : 53271.030us 00:10:57.022 00:10:57.022 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:57.022 ================================================================================= 00:10:57.022 1.00000% : 8369.658us 00:10:57.022 10.00000% : 8580.215us 00:10:57.022 25.00000% : 8843.412us 00:10:57.022 50.00000% : 9159.248us 00:10:57.022 75.00000% : 9475.084us 00:10:57.022 90.00000% : 10527.871us 00:10:57.022 95.00000% : 14212.627us 00:10:57.022 98.00000% : 18318.496us 00:10:57.022 99.00000% : 19581.841us 00:10:57.022 99.50000% : 44427.618us 00:10:57.022 99.90000% : 50954.898us 00:10:57.022 99.99000% : 51376.013us 00:10:57.022 99.99900% : 51376.013us 00:10:57.022 99.99990% : 51376.013us 00:10:57.023 99.99999% : 51376.013us 00:10:57.023 00:10:57.023 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:57.023 ================================================================================= 00:10:57.023 1.00000% : 8369.658us 00:10:57.023 10.00000% : 8632.855us 00:10:57.023 25.00000% : 8843.412us 00:10:57.023 50.00000% : 9159.248us 00:10:57.023 75.00000% : 9475.084us 00:10:57.023 90.00000% : 10422.593us 00:10:57.023 95.00000% : 14317.905us 00:10:57.023 98.00000% : 18423.775us 00:10:57.023 
99.00000% : 19687.120us 00:10:57.023 99.50000% : 41900.929us 00:10:57.023 99.90000% : 48638.766us 00:10:57.023 99.99000% : 49059.881us 00:10:57.023 99.99900% : 49059.881us 00:10:57.023 99.99990% : 49059.881us 00:10:57.023 99.99999% : 49059.881us 00:10:57.023 00:10:57.023 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:57.023 ================================================================================= 00:10:57.023 1.00000% : 8317.018us 00:10:57.023 10.00000% : 8632.855us 00:10:57.023 25.00000% : 8843.412us 00:10:57.023 50.00000% : 9159.248us 00:10:57.023 75.00000% : 9475.084us 00:10:57.023 90.00000% : 10422.593us 00:10:57.023 95.00000% : 14423.184us 00:10:57.023 98.00000% : 18739.611us 00:10:57.023 99.00000% : 20424.071us 00:10:57.023 99.50000% : 39374.239us 00:10:57.023 99.90000% : 46322.635us 00:10:57.023 99.99000% : 46533.192us 00:10:57.023 99.99900% : 46743.749us 00:10:57.023 99.99990% : 46743.749us 00:10:57.023 99.99999% : 46743.749us 00:10:57.023 00:10:57.023 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:57.023 ================================================================================= 00:10:57.023 1.00000% : 8317.018us 00:10:57.023 10.00000% : 8632.855us 00:10:57.023 25.00000% : 8843.412us 00:10:57.023 50.00000% : 9159.248us 00:10:57.023 75.00000% : 9475.084us 00:10:57.023 90.00000% : 10475.232us 00:10:57.023 95.00000% : 15160.135us 00:10:57.023 98.00000% : 19266.005us 00:10:57.023 99.00000% : 20845.186us 00:10:57.023 99.50000% : 31794.172us 00:10:57.023 99.90000% : 38110.895us 00:10:57.023 99.99000% : 38532.010us 00:10:57.023 99.99900% : 38532.010us 00:10:57.023 99.99990% : 38532.010us 00:10:57.023 99.99999% : 38532.010us 00:10:57.023 00:10:57.023 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:57.023 ============================================================================== 00:10:57.023 Range in us Cumulative IO count 00:10:57.023 8001.182 - 8053.822: 0.0232% ( 3) 00:10:57.023 8053.822 - 8106.461: 0.0851% ( 8) 00:10:57.023 8106.461 - 8159.100: 0.2785% ( 25) 00:10:57.023 8159.100 - 8211.740: 0.7812% ( 65) 00:10:57.023 8211.740 - 8264.379: 1.4774% ( 90) 00:10:57.023 8264.379 - 8317.018: 2.6222% ( 148) 00:10:57.023 8317.018 - 8369.658: 3.9604% ( 173) 00:10:57.023 8369.658 - 8422.297: 5.7550% ( 232) 00:10:57.023 8422.297 - 8474.937: 7.8821% ( 275) 00:10:57.023 8474.937 - 8527.576: 10.1795% ( 297) 00:10:57.023 8527.576 - 8580.215: 12.7398% ( 331) 00:10:57.023 8580.215 - 8632.855: 15.6637% ( 378) 00:10:57.023 8632.855 - 8685.494: 18.8892% ( 417) 00:10:57.023 8685.494 - 8738.133: 22.0374% ( 407) 00:10:57.023 8738.133 - 8790.773: 25.6265% ( 464) 00:10:57.023 8790.773 - 8843.412: 29.3549% ( 482) 00:10:57.023 8843.412 - 8896.051: 33.0291% ( 475) 00:10:57.023 8896.051 - 8948.691: 36.8812% ( 498) 00:10:57.023 8948.691 - 9001.330: 40.8261% ( 510) 00:10:57.023 9001.330 - 9053.969: 44.8716% ( 523) 00:10:57.023 9053.969 - 9106.609: 49.2652% ( 568) 00:10:57.023 9106.609 - 9159.248: 53.4963% ( 547) 00:10:57.023 9159.248 - 9211.888: 57.5650% ( 526) 00:10:57.023 9211.888 - 9264.527: 61.4867% ( 507) 00:10:57.023 9264.527 - 9317.166: 65.0449% ( 460) 00:10:57.023 9317.166 - 9369.806: 68.3942% ( 433) 00:10:57.023 9369.806 - 9422.445: 71.4573% ( 396) 00:10:57.023 9422.445 - 9475.084: 74.0795% ( 339) 00:10:57.023 9475.084 - 9527.724: 76.5006% ( 313) 00:10:57.023 9527.724 - 9580.363: 78.6433% ( 277) 00:10:57.023 9580.363 - 9633.002: 80.5306% ( 244) 00:10:57.023 9633.002 - 9685.642: 82.2246% ( 219) 00:10:57.023 9685.642 - 
9738.281: 83.7485% ( 197) 00:10:57.023 9738.281 - 9790.920: 85.0480% ( 168) 00:10:57.023 9790.920 - 9843.560: 86.0458% ( 129) 00:10:57.023 9843.560 - 9896.199: 86.8657% ( 106) 00:10:57.023 9896.199 - 9948.839: 87.5464% ( 88) 00:10:57.023 9948.839 - 10001.478: 88.3277% ( 101) 00:10:57.023 10001.478 - 10054.117: 88.7763% ( 58) 00:10:57.023 10054.117 - 10106.757: 89.2636% ( 63) 00:10:57.023 10106.757 - 10159.396: 89.5730% ( 40) 00:10:57.023 10159.396 - 10212.035: 89.7896% ( 28) 00:10:57.023 10212.035 - 10264.675: 89.9056% ( 15) 00:10:57.023 10264.675 - 10317.314: 90.0294% ( 16) 00:10:57.023 10317.314 - 10369.953: 90.1145% ( 11) 00:10:57.023 10369.953 - 10422.593: 90.2150% ( 13) 00:10:57.023 10422.593 - 10475.232: 90.3079% ( 12) 00:10:57.023 10475.232 - 10527.871: 90.4084% ( 13) 00:10:57.023 10527.871 - 10580.511: 90.5090% ( 13) 00:10:57.023 10580.511 - 10633.150: 90.5863% ( 10) 00:10:57.023 10633.150 - 10685.790: 90.6714% ( 11) 00:10:57.023 10685.790 - 10738.429: 90.7488% ( 10) 00:10:57.023 10738.429 - 10791.068: 90.8416% ( 12) 00:10:57.023 10791.068 - 10843.708: 90.9344% ( 12) 00:10:57.023 10843.708 - 10896.347: 91.0118% ( 10) 00:10:57.023 10896.347 - 10948.986: 91.1123% ( 13) 00:10:57.023 10948.986 - 11001.626: 91.1974% ( 11) 00:10:57.023 11001.626 - 11054.265: 91.3212% ( 16) 00:10:57.023 11054.265 - 11106.904: 91.4217% ( 13) 00:10:57.023 11106.904 - 11159.544: 91.5068% ( 11) 00:10:57.023 11159.544 - 11212.183: 91.6228% ( 15) 00:10:57.023 11212.183 - 11264.822: 91.7466% ( 16) 00:10:57.023 11264.822 - 11317.462: 91.8549% ( 14) 00:10:57.023 11317.462 - 11370.101: 91.9168% ( 8) 00:10:57.023 11370.101 - 11422.741: 91.9941% ( 10) 00:10:57.023 11422.741 - 11475.380: 92.0792% ( 11) 00:10:57.023 11475.380 - 11528.019: 92.1566% ( 10) 00:10:57.023 11528.019 - 11580.659: 92.2416% ( 11) 00:10:57.023 11580.659 - 11633.298: 92.2803% ( 5) 00:10:57.023 11633.298 - 11685.937: 92.3499% ( 9) 00:10:57.023 11685.937 - 11738.577: 92.4273% ( 10) 00:10:57.023 11738.577 - 11791.216: 92.4892% ( 8) 00:10:57.023 11791.216 - 11843.855: 92.5665% ( 10) 00:10:57.023 11843.855 - 11896.495: 92.6671% ( 13) 00:10:57.023 11896.495 - 11949.134: 92.7367% ( 9) 00:10:57.023 11949.134 - 12001.773: 92.8218% ( 11) 00:10:57.023 12001.773 - 12054.413: 92.9069% ( 11) 00:10:57.023 12054.413 - 12107.052: 93.0074% ( 13) 00:10:57.023 12107.052 - 12159.692: 93.0848% ( 10) 00:10:57.023 12159.692 - 12212.331: 93.1776% ( 12) 00:10:57.023 12212.331 - 12264.970: 93.2472% ( 9) 00:10:57.023 12264.970 - 12317.610: 93.3246% ( 10) 00:10:57.023 12317.610 - 12370.249: 93.4097% ( 11) 00:10:57.023 12370.249 - 12422.888: 93.4715% ( 8) 00:10:57.023 12422.888 - 12475.528: 93.5334% ( 8) 00:10:57.023 12475.528 - 12528.167: 93.6030% ( 9) 00:10:57.023 12528.167 - 12580.806: 93.6572% ( 7) 00:10:57.023 12580.806 - 12633.446: 93.7036% ( 6) 00:10:57.023 12633.446 - 12686.085: 93.7577% ( 7) 00:10:57.023 12686.085 - 12738.724: 93.8041% ( 6) 00:10:57.023 12738.724 - 12791.364: 93.8351% ( 4) 00:10:57.023 12791.364 - 12844.003: 93.8892% ( 7) 00:10:57.023 12844.003 - 12896.643: 93.9356% ( 6) 00:10:57.023 12896.643 - 12949.282: 93.9898% ( 7) 00:10:57.023 12949.282 - 13001.921: 94.0207% ( 4) 00:10:57.023 13001.921 - 13054.561: 94.0826% ( 8) 00:10:57.023 13054.561 - 13107.200: 94.1368% ( 7) 00:10:57.023 13107.200 - 13159.839: 94.1832% ( 6) 00:10:57.023 13159.839 - 13212.479: 94.2218% ( 5) 00:10:57.023 13212.479 - 13265.118: 94.2528% ( 4) 00:10:57.023 13265.118 - 13317.757: 94.2837% ( 4) 00:10:57.023 13317.757 - 13370.397: 94.3147% ( 4) 00:10:57.023 13370.397 - 13423.036: 
94.3379% ( 3) 00:10:57.023 13423.036 - 13475.676: 94.3611% ( 3) 00:10:57.023 13475.676 - 13580.954: 94.4384% ( 10) 00:10:57.023 13580.954 - 13686.233: 94.4616% ( 3) 00:10:57.023 13686.233 - 13791.512: 94.4926% ( 4) 00:10:57.023 13791.512 - 13896.790: 94.5235% ( 4) 00:10:57.023 13896.790 - 14002.069: 94.5545% ( 4) 00:10:57.023 14212.627 - 14317.905: 94.5777% ( 3) 00:10:57.023 14317.905 - 14423.184: 94.6009% ( 3) 00:10:57.023 14423.184 - 14528.463: 94.6318% ( 4) 00:10:57.023 14528.463 - 14633.741: 94.6705% ( 5) 00:10:57.023 14633.741 - 14739.020: 94.7014% ( 4) 00:10:57.023 14739.020 - 14844.299: 94.7324% ( 4) 00:10:57.023 14844.299 - 14949.578: 94.7865% ( 7) 00:10:57.023 14949.578 - 15054.856: 94.8484% ( 8) 00:10:57.023 15054.856 - 15160.135: 94.9567% ( 14) 00:10:57.023 15160.135 - 15265.414: 95.0727% ( 15) 00:10:57.023 15265.414 - 15370.692: 95.1887% ( 15) 00:10:57.023 15370.692 - 15475.971: 95.2893% ( 13) 00:10:57.023 15475.971 - 15581.250: 95.3899% ( 13) 00:10:57.023 15581.250 - 15686.529: 95.5213% ( 17) 00:10:57.023 15686.529 - 15791.807: 95.6761% ( 20) 00:10:57.023 15791.807 - 15897.086: 95.8385% ( 21) 00:10:57.023 15897.086 - 16002.365: 95.9700% ( 17) 00:10:57.023 16002.365 - 16107.643: 96.1015% ( 17) 00:10:57.023 16107.643 - 16212.922: 96.2407% ( 18) 00:10:57.023 16212.922 - 16318.201: 96.3877% ( 19) 00:10:57.023 16318.201 - 16423.480: 96.5424% ( 20) 00:10:57.023 16423.480 - 16528.758: 96.6507% ( 14) 00:10:57.023 16528.758 - 16634.037: 96.7744% ( 16) 00:10:57.023 16634.037 - 16739.316: 96.8905% ( 15) 00:10:57.023 16739.316 - 16844.594: 96.9910% ( 13) 00:10:57.023 16844.594 - 16949.873: 97.1071% ( 15) 00:10:57.023 16949.873 - 17055.152: 97.2076% ( 13) 00:10:57.024 17055.152 - 17160.431: 97.3159% ( 14) 00:10:57.024 17160.431 - 17265.709: 97.4397% ( 16) 00:10:57.024 17265.709 - 17370.988: 97.5015% ( 8) 00:10:57.024 17370.988 - 17476.267: 97.5248% ( 3) 00:10:57.024 18107.939 - 18213.218: 97.5634% ( 5) 00:10:57.024 18213.218 - 18318.496: 97.6021% ( 5) 00:10:57.024 18318.496 - 18423.775: 97.6485% ( 6) 00:10:57.024 18423.775 - 18529.054: 97.6872% ( 5) 00:10:57.024 18529.054 - 18634.333: 97.7336% ( 6) 00:10:57.024 18634.333 - 18739.611: 97.7723% ( 5) 00:10:57.024 18739.611 - 18844.890: 97.8110% ( 5) 00:10:57.024 18844.890 - 18950.169: 97.8496% ( 5) 00:10:57.024 18950.169 - 19055.447: 97.8883% ( 5) 00:10:57.024 19055.447 - 19160.726: 97.9115% ( 3) 00:10:57.024 19160.726 - 19266.005: 97.9579% ( 6) 00:10:57.024 19266.005 - 19371.284: 98.0121% ( 7) 00:10:57.024 19371.284 - 19476.562: 98.0507% ( 5) 00:10:57.024 19476.562 - 19581.841: 98.0817% ( 4) 00:10:57.024 19581.841 - 19687.120: 98.1590% ( 10) 00:10:57.024 19687.120 - 19792.398: 98.2441% ( 11) 00:10:57.024 19792.398 - 19897.677: 98.3215% ( 10) 00:10:57.024 19897.677 - 20002.956: 98.3834% ( 8) 00:10:57.024 20002.956 - 20108.235: 98.4762% ( 12) 00:10:57.024 20108.235 - 20213.513: 98.5381% ( 8) 00:10:57.024 20213.513 - 20318.792: 98.6231% ( 11) 00:10:57.024 20318.792 - 20424.071: 98.7005% ( 10) 00:10:57.024 20424.071 - 20529.349: 98.7778% ( 10) 00:10:57.024 20529.349 - 20634.628: 98.8475% ( 9) 00:10:57.024 20634.628 - 20739.907: 98.9325% ( 11) 00:10:57.024 20739.907 - 20845.186: 98.9790% ( 6) 00:10:57.024 20845.186 - 20950.464: 99.0099% ( 4) 00:10:57.024 46112.077 - 46322.635: 99.0176% ( 1) 00:10:57.024 46322.635 - 46533.192: 99.0563% ( 5) 00:10:57.024 46533.192 - 46743.749: 99.1105% ( 7) 00:10:57.024 46743.749 - 46954.307: 99.1569% ( 6) 00:10:57.024 46954.307 - 47164.864: 99.2033% ( 6) 00:10:57.024 47164.864 - 47375.422: 99.2497% ( 6) 
00:10:57.024 47375.422 - 47585.979: 99.2961% ( 6) 00:10:57.024 47585.979 - 47796.537: 99.3425% ( 6) 00:10:57.024 47796.537 - 48007.094: 99.3889% ( 6) 00:10:57.024 48007.094 - 48217.651: 99.4353% ( 6) 00:10:57.024 48217.651 - 48428.209: 99.4895% ( 7) 00:10:57.024 48428.209 - 48638.766: 99.5050% ( 2) 00:10:57.024 53271.030 - 53481.587: 99.5359% ( 4) 00:10:57.024 53481.587 - 53692.145: 99.5900% ( 7) 00:10:57.024 53692.145 - 53902.702: 99.6287% ( 5) 00:10:57.024 53902.702 - 54323.817: 99.7293% ( 13) 00:10:57.024 54323.817 - 54744.932: 99.8298% ( 13) 00:10:57.024 54744.932 - 55166.047: 99.9226% ( 12) 00:10:57.024 55166.047 - 55587.161: 100.0000% ( 10) 00:10:57.024 00:10:57.024 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:57.024 ============================================================================== 00:10:57.024 Range in us Cumulative IO count 00:10:57.024 8159.100 - 8211.740: 0.1315% ( 17) 00:10:57.024 8211.740 - 8264.379: 0.5337% ( 52) 00:10:57.024 8264.379 - 8317.018: 1.1603% ( 81) 00:10:57.024 8317.018 - 8369.658: 2.2355% ( 139) 00:10:57.024 8369.658 - 8422.297: 3.8985% ( 215) 00:10:57.024 8422.297 - 8474.937: 5.5074% ( 208) 00:10:57.024 8474.937 - 8527.576: 7.7661% ( 292) 00:10:57.024 8527.576 - 8580.215: 10.3960% ( 340) 00:10:57.024 8580.215 - 8632.855: 13.3354% ( 380) 00:10:57.024 8632.855 - 8685.494: 16.6460% ( 428) 00:10:57.024 8685.494 - 8738.133: 20.1037% ( 447) 00:10:57.024 8738.133 - 8790.773: 23.7624% ( 473) 00:10:57.024 8790.773 - 8843.412: 27.8079% ( 523) 00:10:57.024 8843.412 - 8896.051: 31.8920% ( 528) 00:10:57.024 8896.051 - 8948.691: 36.2005% ( 557) 00:10:57.024 8948.691 - 9001.330: 40.5786% ( 566) 00:10:57.024 9001.330 - 9053.969: 45.0804% ( 582) 00:10:57.024 9053.969 - 9106.609: 49.6829% ( 595) 00:10:57.024 9106.609 - 9159.248: 54.2543% ( 591) 00:10:57.024 9159.248 - 9211.888: 58.5164% ( 551) 00:10:57.024 9211.888 - 9264.527: 62.4845% ( 513) 00:10:57.024 9264.527 - 9317.166: 66.1433% ( 473) 00:10:57.024 9317.166 - 9369.806: 69.5777% ( 444) 00:10:57.024 9369.806 - 9422.445: 72.5634% ( 386) 00:10:57.024 9422.445 - 9475.084: 75.2166% ( 343) 00:10:57.024 9475.084 - 9527.724: 77.5449% ( 301) 00:10:57.024 9527.724 - 9580.363: 79.6875% ( 277) 00:10:57.024 9580.363 - 9633.002: 81.4356% ( 226) 00:10:57.024 9633.002 - 9685.642: 82.9285% ( 193) 00:10:57.024 9685.642 - 9738.281: 84.2590% ( 172) 00:10:57.024 9738.281 - 9790.920: 85.3264% ( 138) 00:10:57.024 9790.920 - 9843.560: 86.3011% ( 126) 00:10:57.024 9843.560 - 9896.199: 87.1287% ( 107) 00:10:57.024 9896.199 - 9948.839: 87.8713% ( 96) 00:10:57.024 9948.839 - 10001.478: 88.4205% ( 71) 00:10:57.024 10001.478 - 10054.117: 88.7918% ( 48) 00:10:57.024 10054.117 - 10106.757: 89.0857% ( 38) 00:10:57.024 10106.757 - 10159.396: 89.3023% ( 28) 00:10:57.024 10159.396 - 10212.035: 89.4106% ( 14) 00:10:57.024 10212.035 - 10264.675: 89.5111% ( 13) 00:10:57.024 10264.675 - 10317.314: 89.6194% ( 14) 00:10:57.024 10317.314 - 10369.953: 89.7432% ( 16) 00:10:57.024 10369.953 - 10422.593: 89.8438% ( 13) 00:10:57.024 10422.593 - 10475.232: 89.9520% ( 14) 00:10:57.024 10475.232 - 10527.871: 90.0603% ( 14) 00:10:57.024 10527.871 - 10580.511: 90.1609% ( 13) 00:10:57.024 10580.511 - 10633.150: 90.2537% ( 12) 00:10:57.024 10633.150 - 10685.790: 90.3852% ( 17) 00:10:57.024 10685.790 - 10738.429: 90.4935% ( 14) 00:10:57.024 10738.429 - 10791.068: 90.6095% ( 15) 00:10:57.024 10791.068 - 10843.708: 90.7256% ( 15) 00:10:57.024 10843.708 - 10896.347: 90.8261% ( 13) 00:10:57.024 10896.347 - 10948.986: 90.9189% ( 12) 00:10:57.024 
10948.986 - 11001.626: 91.0040% ( 11) 00:10:57.024 11001.626 - 11054.265: 91.0736% ( 9) 00:10:57.024 11054.265 - 11106.904: 91.1587% ( 11) 00:10:57.024 11106.904 - 11159.544: 91.2283% ( 9) 00:10:57.024 11159.544 - 11212.183: 91.2980% ( 9) 00:10:57.024 11212.183 - 11264.822: 91.3908% ( 12) 00:10:57.024 11264.822 - 11317.462: 91.4449% ( 7) 00:10:57.024 11317.462 - 11370.101: 91.4913% ( 6) 00:10:57.024 11370.101 - 11422.741: 91.5532% ( 8) 00:10:57.024 11422.741 - 11475.380: 91.6074% ( 7) 00:10:57.024 11475.380 - 11528.019: 91.6615% ( 7) 00:10:57.024 11528.019 - 11580.659: 91.7311% ( 9) 00:10:57.024 11580.659 - 11633.298: 91.8162% ( 11) 00:10:57.024 11633.298 - 11685.937: 91.8858% ( 9) 00:10:57.024 11685.937 - 11738.577: 91.9709% ( 11) 00:10:57.024 11738.577 - 11791.216: 92.0637% ( 12) 00:10:57.024 11791.216 - 11843.855: 92.1411% ( 10) 00:10:57.024 11843.855 - 11896.495: 92.2184% ( 10) 00:10:57.024 11896.495 - 11949.134: 92.3035% ( 11) 00:10:57.024 11949.134 - 12001.773: 92.4273% ( 16) 00:10:57.024 12001.773 - 12054.413: 92.5511% ( 16) 00:10:57.024 12054.413 - 12107.052: 92.6593% ( 14) 00:10:57.024 12107.052 - 12159.692: 92.7908% ( 17) 00:10:57.024 12159.692 - 12212.331: 92.8914% ( 13) 00:10:57.024 12212.331 - 12264.970: 93.0152% ( 16) 00:10:57.024 12264.970 - 12317.610: 93.1002% ( 11) 00:10:57.024 12317.610 - 12370.249: 93.2085% ( 14) 00:10:57.024 12370.249 - 12422.888: 93.3091% ( 13) 00:10:57.024 12422.888 - 12475.528: 93.4097% ( 13) 00:10:57.024 12475.528 - 12528.167: 93.5102% ( 13) 00:10:57.024 12528.167 - 12580.806: 93.6108% ( 13) 00:10:57.024 12580.806 - 12633.446: 93.7113% ( 13) 00:10:57.024 12633.446 - 12686.085: 93.8119% ( 13) 00:10:57.024 12686.085 - 12738.724: 93.9202% ( 14) 00:10:57.024 12738.724 - 12791.364: 93.9898% ( 9) 00:10:57.024 12791.364 - 12844.003: 94.0749% ( 11) 00:10:57.024 12844.003 - 12896.643: 94.1445% ( 9) 00:10:57.024 12896.643 - 12949.282: 94.2064% ( 8) 00:10:57.024 12949.282 - 13001.921: 94.2450% ( 5) 00:10:57.024 13001.921 - 13054.561: 94.2760% ( 4) 00:10:57.024 13054.561 - 13107.200: 94.3147% ( 5) 00:10:57.024 13107.200 - 13159.839: 94.3379% ( 3) 00:10:57.024 13159.839 - 13212.479: 94.3765% ( 5) 00:10:57.024 13212.479 - 13265.118: 94.4075% ( 4) 00:10:57.024 13265.118 - 13317.757: 94.4462% ( 5) 00:10:57.024 13317.757 - 13370.397: 94.4848% ( 5) 00:10:57.024 13370.397 - 13423.036: 94.5003% ( 2) 00:10:57.024 13423.036 - 13475.676: 94.5158% ( 2) 00:10:57.024 13475.676 - 13580.954: 94.5545% ( 5) 00:10:57.024 13686.233 - 13791.512: 94.5777% ( 3) 00:10:57.024 13791.512 - 13896.790: 94.6086% ( 4) 00:10:57.024 13896.790 - 14002.069: 94.6473% ( 5) 00:10:57.024 14002.069 - 14107.348: 94.6860% ( 5) 00:10:57.024 14107.348 - 14212.627: 94.7246% ( 5) 00:10:57.024 14212.627 - 14317.905: 94.7633% ( 5) 00:10:57.024 14317.905 - 14423.184: 94.8020% ( 5) 00:10:57.024 14423.184 - 14528.463: 94.8329% ( 4) 00:10:57.024 14528.463 - 14633.741: 94.9025% ( 9) 00:10:57.024 14633.741 - 14739.020: 94.9644% ( 8) 00:10:57.024 14739.020 - 14844.299: 95.0495% ( 11) 00:10:57.024 14844.299 - 14949.578: 95.1269% ( 10) 00:10:57.024 14949.578 - 15054.856: 95.2119% ( 11) 00:10:57.024 15054.856 - 15160.135: 95.2816% ( 9) 00:10:57.024 15160.135 - 15265.414: 95.3202% ( 5) 00:10:57.024 15265.414 - 15370.692: 95.3666% ( 6) 00:10:57.024 15370.692 - 15475.971: 95.4440% ( 10) 00:10:57.024 15475.971 - 15581.250: 95.5136% ( 9) 00:10:57.024 15581.250 - 15686.529: 95.5910% ( 10) 00:10:57.024 15686.529 - 15791.807: 95.6683% ( 10) 00:10:57.024 15791.807 - 15897.086: 95.7457% ( 10) 00:10:57.024 15897.086 - 16002.365: 
95.8385% ( 12) 00:10:57.024 16002.365 - 16107.643: 95.9236% ( 11) 00:10:57.024 16107.643 - 16212.922: 96.0087% ( 11) 00:10:57.024 16212.922 - 16318.201: 96.1015% ( 12) 00:10:57.024 16318.201 - 16423.480: 96.1943% ( 12) 00:10:57.024 16423.480 - 16528.758: 96.2794% ( 11) 00:10:57.024 16528.758 - 16634.037: 96.3954% ( 15) 00:10:57.024 16634.037 - 16739.316: 96.5192% ( 16) 00:10:57.024 16739.316 - 16844.594: 96.6352% ( 15) 00:10:57.024 16844.594 - 16949.873: 96.6971% ( 8) 00:10:57.024 16949.873 - 17055.152: 96.7899% ( 12) 00:10:57.025 17055.152 - 17160.431: 96.8827% ( 12) 00:10:57.025 17160.431 - 17265.709: 96.9601% ( 10) 00:10:57.025 17265.709 - 17370.988: 97.0529% ( 12) 00:10:57.025 17370.988 - 17476.267: 97.1380% ( 11) 00:10:57.025 17476.267 - 17581.545: 97.2308% ( 12) 00:10:57.025 17581.545 - 17686.824: 97.3546% ( 16) 00:10:57.025 17686.824 - 17792.103: 97.4861% ( 17) 00:10:57.025 17792.103 - 17897.382: 97.6021% ( 15) 00:10:57.025 17897.382 - 18002.660: 97.6949% ( 12) 00:10:57.025 18002.660 - 18107.939: 97.7645% ( 9) 00:10:57.025 18107.939 - 18213.218: 97.8110% ( 6) 00:10:57.025 18213.218 - 18318.496: 97.8574% ( 6) 00:10:57.025 18318.496 - 18423.775: 97.9347% ( 10) 00:10:57.025 18423.775 - 18529.054: 98.0430% ( 14) 00:10:57.025 18529.054 - 18634.333: 98.1281% ( 11) 00:10:57.025 18634.333 - 18739.611: 98.2054% ( 10) 00:10:57.025 18739.611 - 18844.890: 98.2596% ( 7) 00:10:57.025 18844.890 - 18950.169: 98.3137% ( 7) 00:10:57.025 18950.169 - 19055.447: 98.3756% ( 8) 00:10:57.025 19055.447 - 19160.726: 98.4684% ( 12) 00:10:57.025 19160.726 - 19266.005: 98.5690% ( 13) 00:10:57.025 19266.005 - 19371.284: 98.6463% ( 10) 00:10:57.025 19371.284 - 19476.562: 98.6850% ( 5) 00:10:57.025 19476.562 - 19581.841: 98.7237% ( 5) 00:10:57.025 19581.841 - 19687.120: 98.7624% ( 5) 00:10:57.025 19687.120 - 19792.398: 98.8011% ( 5) 00:10:57.025 19792.398 - 19897.677: 98.8397% ( 5) 00:10:57.025 19897.677 - 20002.956: 98.8784% ( 5) 00:10:57.025 20002.956 - 20108.235: 98.9248% ( 6) 00:10:57.025 20108.235 - 20213.513: 98.9558% ( 4) 00:10:57.025 20213.513 - 20318.792: 98.9944% ( 5) 00:10:57.025 20318.792 - 20424.071: 99.0099% ( 2) 00:10:57.025 44006.503 - 44217.060: 99.0563% ( 6) 00:10:57.025 44217.060 - 44427.618: 99.1027% ( 6) 00:10:57.025 44427.618 - 44638.175: 99.1569% ( 7) 00:10:57.025 44638.175 - 44848.733: 99.2110% ( 7) 00:10:57.025 44848.733 - 45059.290: 99.2574% ( 6) 00:10:57.025 45059.290 - 45269.847: 99.3116% ( 7) 00:10:57.025 45269.847 - 45480.405: 99.3580% ( 6) 00:10:57.025 45480.405 - 45690.962: 99.4121% ( 7) 00:10:57.025 45690.962 - 45901.520: 99.4585% ( 6) 00:10:57.025 45901.520 - 46112.077: 99.5050% ( 6) 00:10:57.025 50954.898 - 51165.455: 99.5359% ( 4) 00:10:57.025 51165.455 - 51376.013: 99.5900% ( 7) 00:10:57.025 51376.013 - 51586.570: 99.6364% ( 6) 00:10:57.025 51586.570 - 51797.128: 99.6829% ( 6) 00:10:57.025 51797.128 - 52007.685: 99.7293% ( 6) 00:10:57.025 52007.685 - 52218.243: 99.7834% ( 7) 00:10:57.025 52218.243 - 52428.800: 99.8298% ( 6) 00:10:57.025 52428.800 - 52639.357: 99.8840% ( 7) 00:10:57.025 52639.357 - 52849.915: 99.9381% ( 7) 00:10:57.025 52849.915 - 53060.472: 99.9845% ( 6) 00:10:57.025 53060.472 - 53271.030: 100.0000% ( 2) 00:10:57.025 00:10:57.025 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:57.025 ============================================================================== 00:10:57.025 Range in us Cumulative IO count 00:10:57.025 8106.461 - 8159.100: 0.0232% ( 3) 00:10:57.025 8159.100 - 8211.740: 0.1238% ( 13) 00:10:57.025 8211.740 - 8264.379: 0.3636% ( 
31) 00:10:57.025 8264.379 - 8317.018: 0.9669% ( 78) 00:10:57.025 8317.018 - 8369.658: 2.1581% ( 154) 00:10:57.025 8369.658 - 8422.297: 3.6123% ( 188) 00:10:57.025 8422.297 - 8474.937: 5.5461% ( 250) 00:10:57.025 8474.937 - 8527.576: 7.7738% ( 288) 00:10:57.025 8527.576 - 8580.215: 10.1795% ( 311) 00:10:57.025 8580.215 - 8632.855: 13.1575% ( 385) 00:10:57.025 8632.855 - 8685.494: 16.4449% ( 425) 00:10:57.025 8685.494 - 8738.133: 19.8020% ( 434) 00:10:57.025 8738.133 - 8790.773: 23.5535% ( 485) 00:10:57.025 8790.773 - 8843.412: 27.4366% ( 502) 00:10:57.025 8843.412 - 8896.051: 31.5981% ( 538) 00:10:57.025 8896.051 - 8948.691: 35.8447% ( 549) 00:10:57.025 8948.691 - 9001.330: 40.3233% ( 579) 00:10:57.025 9001.330 - 9053.969: 44.9489% ( 598) 00:10:57.025 9053.969 - 9106.609: 49.6597% ( 609) 00:10:57.025 9106.609 - 9159.248: 54.4090% ( 614) 00:10:57.025 9159.248 - 9211.888: 58.7871% ( 566) 00:10:57.025 9211.888 - 9264.527: 62.8249% ( 522) 00:10:57.025 9264.527 - 9317.166: 66.5223% ( 478) 00:10:57.025 9317.166 - 9369.806: 69.8639% ( 432) 00:10:57.025 9369.806 - 9422.445: 72.8187% ( 382) 00:10:57.025 9422.445 - 9475.084: 75.5956% ( 359) 00:10:57.025 9475.084 - 9527.724: 78.0554% ( 318) 00:10:57.025 9527.724 - 9580.363: 80.1207% ( 267) 00:10:57.025 9580.363 - 9633.002: 81.9307% ( 234) 00:10:57.025 9633.002 - 9685.642: 83.5009% ( 203) 00:10:57.025 9685.642 - 9738.281: 84.8159% ( 170) 00:10:57.025 9738.281 - 9790.920: 85.9994% ( 153) 00:10:57.025 9790.920 - 9843.560: 86.8967% ( 116) 00:10:57.025 9843.560 - 9896.199: 87.6315% ( 95) 00:10:57.025 9896.199 - 9948.839: 88.2194% ( 76) 00:10:57.025 9948.839 - 10001.478: 88.7222% ( 65) 00:10:57.025 10001.478 - 10054.117: 89.1166% ( 51) 00:10:57.025 10054.117 - 10106.757: 89.3642% ( 32) 00:10:57.025 10106.757 - 10159.396: 89.4879% ( 16) 00:10:57.025 10159.396 - 10212.035: 89.5730% ( 11) 00:10:57.025 10212.035 - 10264.675: 89.6504% ( 10) 00:10:57.025 10264.675 - 10317.314: 89.7277% ( 10) 00:10:57.025 10317.314 - 10369.953: 89.8051% ( 10) 00:10:57.025 10369.953 - 10422.593: 89.8979% ( 12) 00:10:57.025 10422.593 - 10475.232: 89.9830% ( 11) 00:10:57.025 10475.232 - 10527.871: 90.0681% ( 11) 00:10:57.025 10527.871 - 10580.511: 90.1686% ( 13) 00:10:57.025 10580.511 - 10633.150: 90.2537% ( 11) 00:10:57.025 10633.150 - 10685.790: 90.3543% ( 13) 00:10:57.025 10685.790 - 10738.429: 90.4316% ( 10) 00:10:57.025 10738.429 - 10791.068: 90.5244% ( 12) 00:10:57.025 10791.068 - 10843.708: 90.5941% ( 9) 00:10:57.025 10843.708 - 10896.347: 90.6714% ( 10) 00:10:57.025 10896.347 - 10948.986: 90.7410% ( 9) 00:10:57.025 10948.986 - 11001.626: 90.8261% ( 11) 00:10:57.025 11001.626 - 11054.265: 90.8803% ( 7) 00:10:57.025 11054.265 - 11106.904: 90.9421% ( 8) 00:10:57.025 11106.904 - 11159.544: 90.9963% ( 7) 00:10:57.025 11159.544 - 11212.183: 91.0427% ( 6) 00:10:57.025 11212.183 - 11264.822: 91.1046% ( 8) 00:10:57.025 11264.822 - 11317.462: 91.1665% ( 8) 00:10:57.025 11317.462 - 11370.101: 91.2206% ( 7) 00:10:57.025 11370.101 - 11422.741: 91.3057% ( 11) 00:10:57.025 11422.741 - 11475.380: 91.3985% ( 12) 00:10:57.025 11475.380 - 11528.019: 91.4836% ( 11) 00:10:57.025 11528.019 - 11580.659: 91.5919% ( 14) 00:10:57.025 11580.659 - 11633.298: 91.6925% ( 13) 00:10:57.025 11633.298 - 11685.937: 91.7775% ( 11) 00:10:57.025 11685.937 - 11738.577: 91.9245% ( 19) 00:10:57.025 11738.577 - 11791.216: 92.0405% ( 15) 00:10:57.025 11791.216 - 11843.855: 92.1334% ( 12) 00:10:57.025 11843.855 - 11896.495: 92.2416% ( 14) 00:10:57.025 11896.495 - 11949.134: 92.3577% ( 15) 00:10:57.025 11949.134 - 
12001.773: 92.4660% ( 14) 00:10:57.025 12001.773 - 12054.413: 92.5588% ( 12) 00:10:57.025 12054.413 - 12107.052: 92.6593% ( 13) 00:10:57.025 12107.052 - 12159.692: 92.7444% ( 11) 00:10:57.025 12159.692 - 12212.331: 92.8373% ( 12) 00:10:57.025 12212.331 - 12264.970: 92.9223% ( 11) 00:10:57.025 12264.970 - 12317.610: 93.0152% ( 12) 00:10:57.025 12317.610 - 12370.249: 93.1157% ( 13) 00:10:57.025 12370.249 - 12422.888: 93.2240% ( 14) 00:10:57.025 12422.888 - 12475.528: 93.3323% ( 14) 00:10:57.025 12475.528 - 12528.167: 93.4019% ( 9) 00:10:57.025 12528.167 - 12580.806: 93.4406% ( 5) 00:10:57.025 12580.806 - 12633.446: 93.4947% ( 7) 00:10:57.025 12633.446 - 12686.085: 93.5412% ( 6) 00:10:57.025 12686.085 - 12738.724: 93.6030% ( 8) 00:10:57.025 12738.724 - 12791.364: 93.6649% ( 8) 00:10:57.025 12791.364 - 12844.003: 93.7191% ( 7) 00:10:57.025 12844.003 - 12896.643: 93.7655% ( 6) 00:10:57.025 12896.643 - 12949.282: 93.8041% ( 5) 00:10:57.025 12949.282 - 13001.921: 93.8428% ( 5) 00:10:57.025 13001.921 - 13054.561: 93.8892% ( 6) 00:10:57.025 13054.561 - 13107.200: 93.9202% ( 4) 00:10:57.025 13107.200 - 13159.839: 93.9821% ( 8) 00:10:57.025 13159.839 - 13212.479: 94.0517% ( 9) 00:10:57.025 13212.479 - 13265.118: 94.1136% ( 8) 00:10:57.025 13265.118 - 13317.757: 94.1832% ( 9) 00:10:57.025 13317.757 - 13370.397: 94.2528% ( 9) 00:10:57.025 13370.397 - 13423.036: 94.3224% ( 9) 00:10:57.025 13423.036 - 13475.676: 94.3611% ( 5) 00:10:57.025 13475.676 - 13580.954: 94.4539% ( 12) 00:10:57.025 13580.954 - 13686.233: 94.5467% ( 12) 00:10:57.025 13686.233 - 13791.512: 94.6395% ( 12) 00:10:57.025 13791.512 - 13896.790: 94.7324% ( 12) 00:10:57.025 13896.790 - 14002.069: 94.8252% ( 12) 00:10:57.025 14002.069 - 14107.348: 94.9412% ( 15) 00:10:57.025 14107.348 - 14212.627: 95.0108% ( 9) 00:10:57.025 14212.627 - 14317.905: 95.0882% ( 10) 00:10:57.025 14317.905 - 14423.184: 95.1655% ( 10) 00:10:57.025 14423.184 - 14528.463: 95.2584% ( 12) 00:10:57.025 14528.463 - 14633.741: 95.3434% ( 11) 00:10:57.025 14633.741 - 14739.020: 95.4053% ( 8) 00:10:57.025 14739.020 - 14844.299: 95.4827% ( 10) 00:10:57.025 14844.299 - 14949.578: 95.5600% ( 10) 00:10:57.025 14949.578 - 15054.856: 95.6374% ( 10) 00:10:57.025 15054.856 - 15160.135: 95.7070% ( 9) 00:10:57.025 15160.135 - 15265.414: 95.7766% ( 9) 00:10:57.025 15265.414 - 15370.692: 95.8385% ( 8) 00:10:57.025 15370.692 - 15475.971: 95.8694% ( 4) 00:10:57.025 15475.971 - 15581.250: 95.9081% ( 5) 00:10:57.025 15581.250 - 15686.529: 95.9468% ( 5) 00:10:57.025 15686.529 - 15791.807: 95.9777% ( 4) 00:10:57.025 15791.807 - 15897.086: 96.0164% ( 5) 00:10:57.025 15897.086 - 16002.365: 96.0396% ( 3) 00:10:57.025 16739.316 - 16844.594: 96.0628% ( 3) 00:10:57.025 16844.594 - 16949.873: 96.1092% ( 6) 00:10:57.025 16949.873 - 17055.152: 96.1402% ( 4) 00:10:57.025 17055.152 - 17160.431: 96.2020% ( 8) 00:10:57.025 17160.431 - 17265.709: 96.2717% ( 9) 00:10:57.026 17265.709 - 17370.988: 96.3800% ( 14) 00:10:57.026 17370.988 - 17476.267: 96.4728% ( 12) 00:10:57.026 17476.267 - 17581.545: 96.6352% ( 21) 00:10:57.026 17581.545 - 17686.824: 96.8286% ( 25) 00:10:57.026 17686.824 - 17792.103: 97.0916% ( 34) 00:10:57.026 17792.103 - 17897.382: 97.2927% ( 26) 00:10:57.026 17897.382 - 18002.660: 97.4551% ( 21) 00:10:57.026 18002.660 - 18107.939: 97.6485% ( 25) 00:10:57.026 18107.939 - 18213.218: 97.8110% ( 21) 00:10:57.026 18213.218 - 18318.496: 98.0043% ( 25) 00:10:57.026 18318.496 - 18423.775: 98.1745% ( 22) 00:10:57.026 18423.775 - 18529.054: 98.3447% ( 22) 00:10:57.026 18529.054 - 18634.333: 98.5071% 
( 21) 00:10:57.026 18634.333 - 18739.611: 98.6231% ( 15) 00:10:57.026 18739.611 - 18844.890: 98.7160% ( 12) 00:10:57.026 18844.890 - 18950.169: 98.7778% ( 8) 00:10:57.026 18950.169 - 19055.447: 98.8165% ( 5) 00:10:57.026 19055.447 - 19160.726: 98.8552% ( 5) 00:10:57.026 19160.726 - 19266.005: 98.9016% ( 6) 00:10:57.026 19266.005 - 19371.284: 98.9403% ( 5) 00:10:57.026 19371.284 - 19476.562: 98.9790% ( 5) 00:10:57.026 19476.562 - 19581.841: 99.0099% ( 4) 00:10:57.026 42111.486 - 42322.043: 99.0486% ( 5) 00:10:57.026 42322.043 - 42532.601: 99.0950% ( 6) 00:10:57.026 42532.601 - 42743.158: 99.1414% ( 6) 00:10:57.026 42743.158 - 42953.716: 99.1955% ( 7) 00:10:57.026 42953.716 - 43164.273: 99.2420% ( 6) 00:10:57.026 43164.273 - 43374.831: 99.2884% ( 6) 00:10:57.026 43374.831 - 43585.388: 99.3425% ( 7) 00:10:57.026 43585.388 - 43795.945: 99.3967% ( 7) 00:10:57.026 43795.945 - 44006.503: 99.4431% ( 6) 00:10:57.026 44006.503 - 44217.060: 99.4972% ( 7) 00:10:57.026 44217.060 - 44427.618: 99.5050% ( 1) 00:10:57.026 49270.439 - 49480.996: 99.5591% ( 7) 00:10:57.026 49480.996 - 49691.553: 99.6055% ( 6) 00:10:57.026 49691.553 - 49902.111: 99.6519% ( 6) 00:10:57.026 49902.111 - 50112.668: 99.6906% ( 5) 00:10:57.026 50112.668 - 50323.226: 99.7447% ( 7) 00:10:57.026 50323.226 - 50533.783: 99.7989% ( 7) 00:10:57.026 50533.783 - 50744.341: 99.8530% ( 7) 00:10:57.026 50744.341 - 50954.898: 99.9072% ( 7) 00:10:57.026 50954.898 - 51165.455: 99.9536% ( 6) 00:10:57.026 51165.455 - 51376.013: 100.0000% ( 6) 00:10:57.026 00:10:57.026 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:57.026 ============================================================================== 00:10:57.026 Range in us Cumulative IO count 00:10:57.026 8106.461 - 8159.100: 0.0541% ( 7) 00:10:57.026 8159.100 - 8211.740: 0.1547% ( 13) 00:10:57.026 8211.740 - 8264.379: 0.4718% ( 41) 00:10:57.026 8264.379 - 8317.018: 0.8663% ( 51) 00:10:57.026 8317.018 - 8369.658: 2.1272% ( 163) 00:10:57.026 8369.658 - 8422.297: 3.6742% ( 200) 00:10:57.026 8422.297 - 8474.937: 5.4301% ( 227) 00:10:57.026 8474.937 - 8527.576: 7.3020% ( 242) 00:10:57.026 8527.576 - 8580.215: 9.9938% ( 348) 00:10:57.026 8580.215 - 8632.855: 12.8636% ( 371) 00:10:57.026 8632.855 - 8685.494: 16.1200% ( 421) 00:10:57.026 8685.494 - 8738.133: 19.3611% ( 419) 00:10:57.026 8738.133 - 8790.773: 23.0894% ( 482) 00:10:57.026 8790.773 - 8843.412: 27.0111% ( 507) 00:10:57.026 8843.412 - 8896.051: 31.3892% ( 566) 00:10:57.026 8896.051 - 8948.691: 35.6358% ( 549) 00:10:57.026 8948.691 - 9001.330: 40.0758% ( 574) 00:10:57.026 9001.330 - 9053.969: 44.7788% ( 608) 00:10:57.026 9053.969 - 9106.609: 49.4199% ( 600) 00:10:57.026 9106.609 - 9159.248: 54.1151% ( 607) 00:10:57.026 9159.248 - 9211.888: 58.6866% ( 591) 00:10:57.026 9211.888 - 9264.527: 62.7707% ( 528) 00:10:57.026 9264.527 - 9317.166: 66.4913% ( 481) 00:10:57.026 9317.166 - 9369.806: 69.8639% ( 436) 00:10:57.026 9369.806 - 9422.445: 72.9425% ( 398) 00:10:57.026 9422.445 - 9475.084: 75.6498% ( 350) 00:10:57.026 9475.084 - 9527.724: 78.0786% ( 314) 00:10:57.026 9527.724 - 9580.363: 80.1361% ( 266) 00:10:57.026 9580.363 - 9633.002: 81.9694% ( 237) 00:10:57.026 9633.002 - 9685.642: 83.6479% ( 217) 00:10:57.026 9685.642 - 9738.281: 85.1021% ( 188) 00:10:57.026 9738.281 - 9790.920: 86.2624% ( 150) 00:10:57.026 9790.920 - 9843.560: 87.1519% ( 115) 00:10:57.026 9843.560 - 9896.199: 87.8636% ( 92) 00:10:57.026 9896.199 - 9948.839: 88.3973% ( 69) 00:10:57.026 9948.839 - 10001.478: 88.8537% ( 59) 00:10:57.026 10001.478 - 10054.117: 
89.2249% ( 48) 00:10:57.026 10054.117 - 10106.757: 89.4106% ( 24) 00:10:57.026 10106.757 - 10159.396: 89.5421% ( 17) 00:10:57.026 10159.396 - 10212.035: 89.6194% ( 10) 00:10:57.026 10212.035 - 10264.675: 89.7355% ( 15) 00:10:57.026 10264.675 - 10317.314: 89.8360% ( 13) 00:10:57.026 10317.314 - 10369.953: 89.9211% ( 11) 00:10:57.026 10369.953 - 10422.593: 90.0217% ( 13) 00:10:57.026 10422.593 - 10475.232: 90.1067% ( 11) 00:10:57.026 10475.232 - 10527.871: 90.1841% ( 10) 00:10:57.026 10527.871 - 10580.511: 90.3156% ( 17) 00:10:57.026 10580.511 - 10633.150: 90.4084% ( 12) 00:10:57.026 10633.150 - 10685.790: 90.5090% ( 13) 00:10:57.026 10685.790 - 10738.429: 90.6018% ( 12) 00:10:57.026 10738.429 - 10791.068: 90.7178% ( 15) 00:10:57.026 10791.068 - 10843.708: 90.8338% ( 15) 00:10:57.026 10843.708 - 10896.347: 90.9267% ( 12) 00:10:57.026 10896.347 - 10948.986: 91.0118% ( 11) 00:10:57.026 10948.986 - 11001.626: 91.0968% ( 11) 00:10:57.026 11001.626 - 11054.265: 91.1819% ( 11) 00:10:57.026 11054.265 - 11106.904: 91.2593% ( 10) 00:10:57.026 11106.904 - 11159.544: 91.3212% ( 8) 00:10:57.026 11159.544 - 11212.183: 91.4062% ( 11) 00:10:57.026 11212.183 - 11264.822: 91.4759% ( 9) 00:10:57.026 11264.822 - 11317.462: 91.5377% ( 8) 00:10:57.026 11317.462 - 11370.101: 91.6074% ( 9) 00:10:57.026 11370.101 - 11422.741: 91.6615% ( 7) 00:10:57.026 11422.741 - 11475.380: 91.7389% ( 10) 00:10:57.026 11475.380 - 11528.019: 91.8085% ( 9) 00:10:57.026 11528.019 - 11580.659: 91.8626% ( 7) 00:10:57.026 11580.659 - 11633.298: 91.9245% ( 8) 00:10:57.026 11633.298 - 11685.937: 91.9709% ( 6) 00:10:57.026 11685.937 - 11738.577: 92.0637% ( 12) 00:10:57.026 11738.577 - 11791.216: 92.1488% ( 11) 00:10:57.026 11791.216 - 11843.855: 92.2107% ( 8) 00:10:57.026 11843.855 - 11896.495: 92.2803% ( 9) 00:10:57.026 11896.495 - 11949.134: 92.3345% ( 7) 00:10:57.026 11949.134 - 12001.773: 92.3886% ( 7) 00:10:57.026 12001.773 - 12054.413: 92.4350% ( 6) 00:10:57.026 12054.413 - 12107.052: 92.4814% ( 6) 00:10:57.026 12107.052 - 12159.692: 92.5433% ( 8) 00:10:57.026 12159.692 - 12212.331: 92.5743% ( 4) 00:10:57.026 12212.331 - 12264.970: 92.6052% ( 4) 00:10:57.026 12264.970 - 12317.610: 92.6361% ( 4) 00:10:57.026 12317.610 - 12370.249: 92.6671% ( 4) 00:10:57.026 12370.249 - 12422.888: 92.7212% ( 7) 00:10:57.026 12422.888 - 12475.528: 92.7676% ( 6) 00:10:57.026 12475.528 - 12528.167: 92.8295% ( 8) 00:10:57.026 12528.167 - 12580.806: 92.8991% ( 9) 00:10:57.026 12580.806 - 12633.446: 92.9610% ( 8) 00:10:57.026 12633.446 - 12686.085: 93.0152% ( 7) 00:10:57.026 12686.085 - 12738.724: 93.0770% ( 8) 00:10:57.026 12738.724 - 12791.364: 93.1312% ( 7) 00:10:57.026 12791.364 - 12844.003: 93.2008% ( 9) 00:10:57.026 12844.003 - 12896.643: 93.2627% ( 8) 00:10:57.026 12896.643 - 12949.282: 93.3323% ( 9) 00:10:57.026 12949.282 - 13001.921: 93.3942% ( 8) 00:10:57.026 13001.921 - 13054.561: 93.4638% ( 9) 00:10:57.026 13054.561 - 13107.200: 93.5334% ( 9) 00:10:57.026 13107.200 - 13159.839: 93.5798% ( 6) 00:10:57.026 13159.839 - 13212.479: 93.6417% ( 8) 00:10:57.026 13212.479 - 13265.118: 93.6959% ( 7) 00:10:57.026 13265.118 - 13317.757: 93.7500% ( 7) 00:10:57.026 13317.757 - 13370.397: 93.7964% ( 6) 00:10:57.026 13370.397 - 13423.036: 93.8351% ( 5) 00:10:57.026 13423.036 - 13475.676: 93.8738% ( 5) 00:10:57.026 13475.676 - 13580.954: 93.9666% ( 12) 00:10:57.026 13580.954 - 13686.233: 94.0671% ( 13) 00:10:57.026 13686.233 - 13791.512: 94.1754% ( 14) 00:10:57.026 13791.512 - 13896.790: 94.3301% ( 20) 00:10:57.026 13896.790 - 14002.069: 94.5545% ( 29) 
00:10:57.026 14002.069 - 14107.348: 94.7788% ( 29) 00:10:57.026 14107.348 - 14212.627: 94.9954% ( 28) 00:10:57.026 14212.627 - 14317.905: 95.1733% ( 23) 00:10:57.026 14317.905 - 14423.184: 95.3357% ( 21) 00:10:57.026 14423.184 - 14528.463: 95.4749% ( 18) 00:10:57.026 14528.463 - 14633.741: 95.5987% ( 16) 00:10:57.026 14633.741 - 14739.020: 95.7302% ( 17) 00:10:57.026 14739.020 - 14844.299: 95.8617% ( 17) 00:10:57.026 14844.299 - 14949.578: 95.9313% ( 9) 00:10:57.026 14949.578 - 15054.856: 95.9700% ( 5) 00:10:57.026 15054.856 - 15160.135: 96.0087% ( 5) 00:10:57.026 15160.135 - 15265.414: 96.0396% ( 4) 00:10:57.026 16528.758 - 16634.037: 96.0628% ( 3) 00:10:57.026 16634.037 - 16739.316: 96.1015% ( 5) 00:10:57.026 16739.316 - 16844.594: 96.1402% ( 5) 00:10:57.026 16844.594 - 16949.873: 96.1866% ( 6) 00:10:57.026 16949.873 - 17055.152: 96.2330% ( 6) 00:10:57.026 17055.152 - 17160.431: 96.3181% ( 11) 00:10:57.026 17160.431 - 17265.709: 96.3954% ( 10) 00:10:57.026 17265.709 - 17370.988: 96.4728% ( 10) 00:10:57.026 17370.988 - 17476.267: 96.5965% ( 16) 00:10:57.026 17476.267 - 17581.545: 96.7203% ( 16) 00:10:57.026 17581.545 - 17686.824: 96.8905% ( 22) 00:10:57.026 17686.824 - 17792.103: 97.0684% ( 23) 00:10:57.026 17792.103 - 17897.382: 97.2695% ( 26) 00:10:57.026 17897.382 - 18002.660: 97.4474% ( 23) 00:10:57.026 18002.660 - 18107.939: 97.6330% ( 24) 00:10:57.026 18107.939 - 18213.218: 97.8264% ( 25) 00:10:57.026 18213.218 - 18318.496: 97.9579% ( 17) 00:10:57.026 18318.496 - 18423.775: 98.1049% ( 19) 00:10:57.026 18423.775 - 18529.054: 98.2364% ( 17) 00:10:57.026 18529.054 - 18634.333: 98.3679% ( 17) 00:10:57.026 18634.333 - 18739.611: 98.4916% ( 16) 00:10:57.027 18739.611 - 18844.890: 98.6231% ( 17) 00:10:57.027 18844.890 - 18950.169: 98.7314% ( 14) 00:10:57.027 18950.169 - 19055.447: 98.7701% ( 5) 00:10:57.027 19055.447 - 19160.726: 98.8165% ( 6) 00:10:57.027 19160.726 - 19266.005: 98.8552% ( 5) 00:10:57.027 19266.005 - 19371.284: 98.9016% ( 6) 00:10:57.027 19371.284 - 19476.562: 98.9325% ( 4) 00:10:57.027 19476.562 - 19581.841: 98.9712% ( 5) 00:10:57.027 19581.841 - 19687.120: 99.0099% ( 5) 00:10:57.027 39795.354 - 40005.912: 99.0563% ( 6) 00:10:57.027 40005.912 - 40216.469: 99.1027% ( 6) 00:10:57.027 40216.469 - 40427.027: 99.1569% ( 7) 00:10:57.027 40427.027 - 40637.584: 99.2033% ( 6) 00:10:57.027 40637.584 - 40848.141: 99.2497% ( 6) 00:10:57.027 40848.141 - 41058.699: 99.2961% ( 6) 00:10:57.027 41058.699 - 41269.256: 99.3502% ( 7) 00:10:57.027 41269.256 - 41479.814: 99.4044% ( 7) 00:10:57.027 41479.814 - 41690.371: 99.4508% ( 6) 00:10:57.027 41690.371 - 41900.929: 99.5050% ( 7) 00:10:57.027 46743.749 - 46954.307: 99.5127% ( 1) 00:10:57.027 46954.307 - 47164.864: 99.5668% ( 7) 00:10:57.027 47164.864 - 47375.422: 99.6210% ( 7) 00:10:57.027 47375.422 - 47585.979: 99.6674% ( 6) 00:10:57.027 47585.979 - 47796.537: 99.7215% ( 7) 00:10:57.027 47796.537 - 48007.094: 99.7679% ( 6) 00:10:57.027 48007.094 - 48217.651: 99.8144% ( 6) 00:10:57.027 48217.651 - 48428.209: 99.8685% ( 7) 00:10:57.027 48428.209 - 48638.766: 99.9226% ( 7) 00:10:57.027 48638.766 - 48849.324: 99.9768% ( 7) 00:10:57.027 48849.324 - 49059.881: 100.0000% ( 3) 00:10:57.027 00:10:57.027 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:57.027 ============================================================================== 00:10:57.027 Range in us Cumulative IO count 00:10:57.027 8106.461 - 8159.100: 0.0309% ( 4) 00:10:57.027 8159.100 - 8211.740: 0.1547% ( 16) 00:10:57.027 8211.740 - 8264.379: 0.4409% ( 37) 
00:10:57.027 8264.379 - 8317.018: 1.0675% ( 81) 00:10:57.027 8317.018 - 8369.658: 2.1194% ( 136) 00:10:57.027 8369.658 - 8422.297: 3.6587% ( 199) 00:10:57.027 8422.297 - 8474.937: 5.4455% ( 231) 00:10:57.027 8474.937 - 8527.576: 7.4722% ( 262) 00:10:57.027 8527.576 - 8580.215: 9.9938% ( 326) 00:10:57.027 8580.215 - 8632.855: 13.0647% ( 397) 00:10:57.027 8632.855 - 8685.494: 16.2361% ( 410) 00:10:57.027 8685.494 - 8738.133: 19.7788% ( 458) 00:10:57.027 8738.133 - 8790.773: 23.3756% ( 465) 00:10:57.027 8790.773 - 8843.412: 27.3670% ( 516) 00:10:57.027 8843.412 - 8896.051: 31.6290% ( 551) 00:10:57.027 8896.051 - 8948.691: 35.9143% ( 554) 00:10:57.027 8948.691 - 9001.330: 40.4471% ( 586) 00:10:57.027 9001.330 - 9053.969: 45.1578% ( 609) 00:10:57.027 9053.969 - 9106.609: 49.7370% ( 592) 00:10:57.027 9106.609 - 9159.248: 54.3858% ( 601) 00:10:57.027 9159.248 - 9211.888: 58.7252% ( 561) 00:10:57.027 9211.888 - 9264.527: 62.6856% ( 512) 00:10:57.027 9264.527 - 9317.166: 66.4140% ( 482) 00:10:57.027 9317.166 - 9369.806: 69.8097% ( 439) 00:10:57.027 9369.806 - 9422.445: 72.9347% ( 404) 00:10:57.027 9422.445 - 9475.084: 75.7503% ( 364) 00:10:57.027 9475.084 - 9527.724: 78.1869% ( 315) 00:10:57.027 9527.724 - 9580.363: 80.2676% ( 269) 00:10:57.027 9580.363 - 9633.002: 82.1550% ( 244) 00:10:57.027 9633.002 - 9685.642: 83.7175% ( 202) 00:10:57.027 9685.642 - 9738.281: 85.1021% ( 179) 00:10:57.027 9738.281 - 9790.920: 86.2237% ( 145) 00:10:57.027 9790.920 - 9843.560: 87.1442% ( 119) 00:10:57.027 9843.560 - 9896.199: 87.7862% ( 83) 00:10:57.027 9896.199 - 9948.839: 88.3277% ( 70) 00:10:57.027 9948.839 - 10001.478: 88.7918% ( 60) 00:10:57.027 10001.478 - 10054.117: 89.1785% ( 50) 00:10:57.027 10054.117 - 10106.757: 89.3564% ( 23) 00:10:57.027 10106.757 - 10159.396: 89.5266% ( 22) 00:10:57.027 10159.396 - 10212.035: 89.6968% ( 22) 00:10:57.027 10212.035 - 10264.675: 89.7664% ( 9) 00:10:57.027 10264.675 - 10317.314: 89.8670% ( 13) 00:10:57.027 10317.314 - 10369.953: 89.9675% ( 13) 00:10:57.027 10369.953 - 10422.593: 90.0835% ( 15) 00:10:57.027 10422.593 - 10475.232: 90.2073% ( 16) 00:10:57.027 10475.232 - 10527.871: 90.3079% ( 13) 00:10:57.027 10527.871 - 10580.511: 90.4471% ( 18) 00:10:57.027 10580.511 - 10633.150: 90.5322% ( 11) 00:10:57.027 10633.150 - 10685.790: 90.6327% ( 13) 00:10:57.027 10685.790 - 10738.429: 90.7642% ( 17) 00:10:57.027 10738.429 - 10791.068: 90.8725% ( 14) 00:10:57.027 10791.068 - 10843.708: 90.9653% ( 12) 00:10:57.027 10843.708 - 10896.347: 91.0350% ( 9) 00:10:57.027 10896.347 - 10948.986: 91.1046% ( 9) 00:10:57.027 10948.986 - 11001.626: 91.1742% ( 9) 00:10:57.027 11001.626 - 11054.265: 91.2438% ( 9) 00:10:57.027 11054.265 - 11106.904: 91.3444% ( 13) 00:10:57.027 11106.904 - 11159.544: 91.4295% ( 11) 00:10:57.027 11159.544 - 11212.183: 91.5300% ( 13) 00:10:57.027 11212.183 - 11264.822: 91.6228% ( 12) 00:10:57.027 11264.822 - 11317.462: 91.7002% ( 10) 00:10:57.027 11317.462 - 11370.101: 91.7930% ( 12) 00:10:57.027 11370.101 - 11422.741: 91.8704% ( 10) 00:10:57.027 11422.741 - 11475.380: 91.9477% ( 10) 00:10:57.027 11475.380 - 11528.019: 92.0019% ( 7) 00:10:57.027 11528.019 - 11580.659: 92.0792% ( 10) 00:10:57.027 11580.659 - 11633.298: 92.1334% ( 7) 00:10:57.027 11633.298 - 11685.937: 92.1798% ( 6) 00:10:57.027 11685.937 - 11738.577: 92.2339% ( 7) 00:10:57.027 11738.577 - 11791.216: 92.2649% ( 4) 00:10:57.027 11791.216 - 11843.855: 92.3035% ( 5) 00:10:57.027 11843.855 - 11896.495: 92.3422% ( 5) 00:10:57.027 11896.495 - 11949.134: 92.3731% ( 4) 00:10:57.027 11949.134 - 12001.773: 
92.3886% ( 2) 00:10:57.027 12001.773 - 12054.413: 92.4041% ( 2) 00:10:57.027 12054.413 - 12107.052: 92.4196% ( 2) 00:10:57.027 12107.052 - 12159.692: 92.4428% ( 3) 00:10:57.027 12159.692 - 12212.331: 92.4582% ( 2) 00:10:57.027 12212.331 - 12264.970: 92.4814% ( 3) 00:10:57.027 12264.970 - 12317.610: 92.5278% ( 6) 00:10:57.027 12317.610 - 12370.249: 92.5743% ( 6) 00:10:57.027 12370.249 - 12422.888: 92.6052% ( 4) 00:10:57.027 12422.888 - 12475.528: 92.6284% ( 3) 00:10:57.027 12475.528 - 12528.167: 92.6748% ( 6) 00:10:57.027 12528.167 - 12580.806: 92.7212% ( 6) 00:10:57.027 12580.806 - 12633.446: 92.7599% ( 5) 00:10:57.027 12633.446 - 12686.085: 92.7986% ( 5) 00:10:57.027 12686.085 - 12738.724: 92.8450% ( 6) 00:10:57.027 12738.724 - 12791.364: 92.8682% ( 3) 00:10:57.027 12791.364 - 12844.003: 92.8991% ( 4) 00:10:57.027 12844.003 - 12896.643: 92.9301% ( 4) 00:10:57.027 12896.643 - 12949.282: 92.9688% ( 5) 00:10:57.027 12949.282 - 13001.921: 93.0074% ( 5) 00:10:57.027 13001.921 - 13054.561: 93.0616% ( 7) 00:10:57.027 13054.561 - 13107.200: 93.1157% ( 7) 00:10:57.027 13107.200 - 13159.839: 93.1699% ( 7) 00:10:57.027 13159.839 - 13212.479: 93.2163% ( 6) 00:10:57.027 13212.479 - 13265.118: 93.2782% ( 8) 00:10:57.027 13265.118 - 13317.757: 93.3555% ( 10) 00:10:57.027 13317.757 - 13370.397: 93.4638% ( 14) 00:10:57.027 13370.397 - 13423.036: 93.5489% ( 11) 00:10:57.027 13423.036 - 13475.676: 93.6572% ( 14) 00:10:57.027 13475.676 - 13580.954: 93.8583% ( 26) 00:10:57.027 13580.954 - 13686.233: 94.0671% ( 27) 00:10:57.027 13686.233 - 13791.512: 94.2450% ( 23) 00:10:57.027 13791.512 - 13896.790: 94.4230% ( 23) 00:10:57.027 13896.790 - 14002.069: 94.5699% ( 19) 00:10:57.027 14002.069 - 14107.348: 94.6937% ( 16) 00:10:57.027 14107.348 - 14212.627: 94.8561% ( 21) 00:10:57.027 14212.627 - 14317.905: 94.9876% ( 17) 00:10:57.027 14317.905 - 14423.184: 95.0804% ( 12) 00:10:57.027 14423.184 - 14528.463: 95.1655% ( 11) 00:10:57.027 14528.463 - 14633.741: 95.2506% ( 11) 00:10:57.027 14633.741 - 14739.020: 95.3202% ( 9) 00:10:57.027 14739.020 - 14844.299: 95.4131% ( 12) 00:10:57.027 14844.299 - 14949.578: 95.5291% ( 15) 00:10:57.027 14949.578 - 15054.856: 95.6219% ( 12) 00:10:57.027 15054.856 - 15160.135: 95.7225% ( 13) 00:10:57.027 15160.135 - 15265.414: 95.8230% ( 13) 00:10:57.027 15265.414 - 15370.692: 95.9004% ( 10) 00:10:57.027 15370.692 - 15475.971: 95.9545% ( 7) 00:10:57.027 15475.971 - 15581.250: 96.0087% ( 7) 00:10:57.027 15581.250 - 15686.529: 96.0396% ( 4) 00:10:57.027 15897.086 - 16002.365: 96.0705% ( 4) 00:10:57.027 16002.365 - 16107.643: 96.1170% ( 6) 00:10:57.027 16107.643 - 16212.922: 96.1556% ( 5) 00:10:57.027 16212.922 - 16318.201: 96.2020% ( 6) 00:10:57.027 16318.201 - 16423.480: 96.2407% ( 5) 00:10:57.027 16423.480 - 16528.758: 96.3335% ( 12) 00:10:57.027 16528.758 - 16634.037: 96.4032% ( 9) 00:10:57.028 16634.037 - 16739.316: 96.4882% ( 11) 00:10:57.028 16739.316 - 16844.594: 96.5656% ( 10) 00:10:57.028 16844.594 - 16949.873: 96.6507% ( 11) 00:10:57.028 16949.873 - 17055.152: 96.7435% ( 12) 00:10:57.028 17055.152 - 17160.431: 96.8518% ( 14) 00:10:57.028 17160.431 - 17265.709: 96.9524% ( 13) 00:10:57.028 17265.709 - 17370.988: 97.0374% ( 11) 00:10:57.028 17370.988 - 17476.267: 97.1148% ( 10) 00:10:57.028 17476.267 - 17581.545: 97.1999% ( 11) 00:10:57.028 17581.545 - 17686.824: 97.2618% ( 8) 00:10:57.028 17686.824 - 17792.103: 97.3082% ( 6) 00:10:57.028 17792.103 - 17897.382: 97.3391% ( 4) 00:10:57.028 17897.382 - 18002.660: 97.3855% ( 6) 00:10:57.028 18002.660 - 18107.939: 97.4242% ( 5) 
00:10:57.028 18107.939 - 18213.218: 97.4706% ( 6) 00:10:57.028 18213.218 - 18318.496: 97.5944% ( 16) 00:10:57.028 18318.496 - 18423.775: 97.7181% ( 16) 00:10:57.028 18423.775 - 18529.054: 97.8187% ( 13) 00:10:57.028 18529.054 - 18634.333: 97.9115% ( 12) 00:10:57.028 18634.333 - 18739.611: 98.0121% ( 13) 00:10:57.028 18739.611 - 18844.890: 98.1204% ( 14) 00:10:57.028 18844.890 - 18950.169: 98.2209% ( 13) 00:10:57.028 18950.169 - 19055.447: 98.3137% ( 12) 00:10:57.028 19055.447 - 19160.726: 98.4607% ( 19) 00:10:57.028 19160.726 - 19266.005: 98.5767% ( 15) 00:10:57.028 19266.005 - 19371.284: 98.6386% ( 8) 00:10:57.028 19371.284 - 19476.562: 98.6850% ( 6) 00:10:57.028 19476.562 - 19581.841: 98.7160% ( 4) 00:10:57.028 19581.841 - 19687.120: 98.7469% ( 4) 00:10:57.028 19687.120 - 19792.398: 98.7778% ( 4) 00:10:57.028 19792.398 - 19897.677: 98.8243% ( 6) 00:10:57.028 19897.677 - 20002.956: 98.8629% ( 5) 00:10:57.028 20002.956 - 20108.235: 98.9016% ( 5) 00:10:57.028 20108.235 - 20213.513: 98.9480% ( 6) 00:10:57.028 20213.513 - 20318.792: 98.9867% ( 5) 00:10:57.028 20318.792 - 20424.071: 99.0099% ( 3) 00:10:57.028 37058.108 - 37268.665: 99.0331% ( 3) 00:10:57.028 37268.665 - 37479.222: 99.0795% ( 6) 00:10:57.028 37479.222 - 37689.780: 99.1337% ( 7) 00:10:57.028 37689.780 - 37900.337: 99.1801% ( 6) 00:10:57.028 37900.337 - 38110.895: 99.2265% ( 6) 00:10:57.028 38110.895 - 38321.452: 99.2729% ( 6) 00:10:57.028 38321.452 - 38532.010: 99.3038% ( 4) 00:10:57.028 38532.010 - 38742.567: 99.3580% ( 7) 00:10:57.028 38742.567 - 38953.124: 99.4044% ( 6) 00:10:57.028 38953.124 - 39163.682: 99.4585% ( 7) 00:10:57.028 39163.682 - 39374.239: 99.5050% ( 6) 00:10:57.028 44427.618 - 44638.175: 99.5436% ( 5) 00:10:57.028 44638.175 - 44848.733: 99.5978% ( 7) 00:10:57.028 44848.733 - 45059.290: 99.6364% ( 5) 00:10:57.028 45059.290 - 45269.847: 99.6906% ( 7) 00:10:57.028 45269.847 - 45480.405: 99.7447% ( 7) 00:10:57.028 45480.405 - 45690.962: 99.7912% ( 6) 00:10:57.028 45690.962 - 45901.520: 99.8376% ( 6) 00:10:57.028 45901.520 - 46112.077: 99.8917% ( 7) 00:10:57.028 46112.077 - 46322.635: 99.9381% ( 6) 00:10:57.028 46322.635 - 46533.192: 99.9923% ( 7) 00:10:57.028 46533.192 - 46743.749: 100.0000% ( 1) 00:10:57.028 00:10:57.028 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:57.028 ============================================================================== 00:10:57.028 Range in us Cumulative IO count 00:10:57.028 8053.822 - 8106.461: 0.0077% ( 1) 00:10:57.028 8106.461 - 8159.100: 0.0462% ( 5) 00:10:57.028 8159.100 - 8211.740: 0.1385% ( 12) 00:10:57.028 8211.740 - 8264.379: 0.4695% ( 43) 00:10:57.028 8264.379 - 8317.018: 1.1315% ( 86) 00:10:57.028 8317.018 - 8369.658: 2.1244% ( 129) 00:10:57.028 8369.658 - 8422.297: 3.5099% ( 180) 00:10:57.028 8422.297 - 8474.937: 5.2879% ( 231) 00:10:57.028 8474.937 - 8527.576: 7.3969% ( 274) 00:10:57.028 8527.576 - 8580.215: 9.9831% ( 336) 00:10:57.028 8580.215 - 8632.855: 12.9926% ( 391) 00:10:57.028 8632.855 - 8685.494: 16.0791% ( 401) 00:10:57.028 8685.494 - 8738.133: 19.5120% ( 446) 00:10:57.028 8738.133 - 8790.773: 23.1989% ( 479) 00:10:57.028 8790.773 - 8843.412: 27.1321% ( 511) 00:10:57.028 8843.412 - 8896.051: 31.4963% ( 567) 00:10:57.028 8896.051 - 8948.691: 35.7297% ( 550) 00:10:57.028 8948.691 - 9001.330: 40.1247% ( 571) 00:10:57.028 9001.330 - 9053.969: 44.8353% ( 612) 00:10:57.028 9053.969 - 9106.609: 49.4920% ( 605) 00:10:57.028 9106.609 - 9159.248: 54.1410% ( 604) 00:10:57.028 9159.248 - 9211.888: 58.6900% ( 591) 00:10:57.028 9211.888 - 9264.527: 
62.8618% ( 542) 00:10:57.028 9264.527 - 9317.166: 66.4948% ( 472) 00:10:57.028 9317.166 - 9369.806: 70.0046% ( 456) 00:10:57.028 9369.806 - 9422.445: 73.0526% ( 396) 00:10:57.028 9422.445 - 9475.084: 75.6081% ( 332) 00:10:57.028 9475.084 - 9527.724: 78.0095% ( 312) 00:10:57.028 9527.724 - 9580.363: 80.1339% ( 276) 00:10:57.028 9580.363 - 9633.002: 81.9119% ( 231) 00:10:57.028 9633.002 - 9685.642: 83.5129% ( 208) 00:10:57.028 9685.642 - 9738.281: 84.8445% ( 173) 00:10:57.028 9738.281 - 9790.920: 85.8682% ( 133) 00:10:57.028 9790.920 - 9843.560: 86.7149% ( 110) 00:10:57.028 9843.560 - 9896.199: 87.4538% ( 96) 00:10:57.028 9896.199 - 9948.839: 87.9926% ( 70) 00:10:57.028 9948.839 - 10001.478: 88.4621% ( 61) 00:10:57.028 10001.478 - 10054.117: 88.8239% ( 47) 00:10:57.028 10054.117 - 10106.757: 89.0548% ( 30) 00:10:57.028 10106.757 - 10159.396: 89.2087% ( 20) 00:10:57.028 10159.396 - 10212.035: 89.3473% ( 18) 00:10:57.028 10212.035 - 10264.675: 89.5089% ( 21) 00:10:57.028 10264.675 - 10317.314: 89.6475% ( 18) 00:10:57.028 10317.314 - 10369.953: 89.7860% ( 18) 00:10:57.028 10369.953 - 10422.593: 89.9323% ( 19) 00:10:57.028 10422.593 - 10475.232: 90.0785% ( 19) 00:10:57.028 10475.232 - 10527.871: 90.2401% ( 21) 00:10:57.028 10527.871 - 10580.511: 90.3479% ( 14) 00:10:57.028 10580.511 - 10633.150: 90.4865% ( 18) 00:10:57.028 10633.150 - 10685.790: 90.6096% ( 16) 00:10:57.028 10685.790 - 10738.429: 90.7020% ( 12) 00:10:57.028 10738.429 - 10791.068: 90.8097% ( 14) 00:10:57.028 10791.068 - 10843.708: 90.9021% ( 12) 00:10:57.028 10843.708 - 10896.347: 91.0099% ( 14) 00:10:57.028 10896.347 - 10948.986: 91.1253% ( 15) 00:10:57.028 10948.986 - 11001.626: 91.2485% ( 16) 00:10:57.028 11001.626 - 11054.265: 91.3639% ( 15) 00:10:57.028 11054.265 - 11106.904: 91.4717% ( 14) 00:10:57.028 11106.904 - 11159.544: 91.5717% ( 13) 00:10:57.028 11159.544 - 11212.183: 91.6564% ( 11) 00:10:57.028 11212.183 - 11264.822: 91.7488% ( 12) 00:10:57.028 11264.822 - 11317.462: 91.8103% ( 8) 00:10:57.028 11317.462 - 11370.101: 91.8719% ( 8) 00:10:57.028 11370.101 - 11422.741: 91.9412% ( 9) 00:10:57.028 11422.741 - 11475.380: 92.0028% ( 8) 00:10:57.028 11475.380 - 11528.019: 92.0951% ( 12) 00:10:57.028 11528.019 - 11580.659: 92.1413% ( 6) 00:10:57.028 11580.659 - 11633.298: 92.1952% ( 7) 00:10:57.028 11633.298 - 11685.937: 92.2414% ( 6) 00:10:57.028 11685.937 - 11738.577: 92.2953% ( 7) 00:10:57.028 11738.577 - 11791.216: 92.3337% ( 5) 00:10:57.028 11791.216 - 11843.855: 92.3568% ( 3) 00:10:57.028 11843.855 - 11896.495: 92.3953% ( 5) 00:10:57.028 11896.495 - 11949.134: 92.4261% ( 4) 00:10:57.028 11949.134 - 12001.773: 92.4569% ( 4) 00:10:57.028 12001.773 - 12054.413: 92.4800% ( 3) 00:10:57.028 12054.413 - 12107.052: 92.5031% ( 3) 00:10:57.028 12107.052 - 12159.692: 92.5185% ( 2) 00:10:57.028 12159.692 - 12212.331: 92.5339% ( 2) 00:10:57.028 12212.331 - 12264.970: 92.5493% ( 2) 00:10:57.028 12264.970 - 12317.610: 92.5954% ( 6) 00:10:57.028 12317.610 - 12370.249: 92.6262% ( 4) 00:10:57.028 12370.249 - 12422.888: 92.6724% ( 6) 00:10:57.028 12422.888 - 12475.528: 92.6955% ( 3) 00:10:57.028 12475.528 - 12528.167: 92.7109% ( 2) 00:10:57.028 12528.167 - 12580.806: 92.7725% ( 8) 00:10:57.028 12580.806 - 12633.446: 92.7956% ( 3) 00:10:57.028 12633.446 - 12686.085: 92.8494% ( 7) 00:10:57.028 12686.085 - 12738.724: 92.8956% ( 6) 00:10:57.028 12738.724 - 12791.364: 92.9649% ( 9) 00:10:57.028 12791.364 - 12844.003: 93.0265% ( 8) 00:10:57.028 12844.003 - 12896.643: 93.0958% ( 9) 00:10:57.028 12896.643 - 12949.282: 93.1496% ( 7) 00:10:57.028 
12949.282 - 13001.921: 93.2112% ( 8) 00:10:57.028 13001.921 - 13054.561: 93.2882% ( 10) 00:10:57.028 13054.561 - 13107.200: 93.3651% ( 10) 00:10:57.028 13107.200 - 13159.839: 93.4421% ( 10) 00:10:57.028 13159.839 - 13212.479: 93.5268% ( 11) 00:10:57.028 13212.479 - 13265.118: 93.5884% ( 8) 00:10:57.028 13265.118 - 13317.757: 93.6576% ( 9) 00:10:57.028 13317.757 - 13370.397: 93.7346% ( 10) 00:10:57.028 13370.397 - 13423.036: 93.8116% ( 10) 00:10:57.028 13423.036 - 13475.676: 93.8808% ( 9) 00:10:57.028 13475.676 - 13580.954: 94.0348% ( 20) 00:10:57.028 13580.954 - 13686.233: 94.1656% ( 17) 00:10:57.028 13686.233 - 13791.512: 94.2734% ( 14) 00:10:57.028 13791.512 - 13896.790: 94.3581% ( 11) 00:10:57.028 13896.790 - 14002.069: 94.4119% ( 7) 00:10:57.028 14002.069 - 14107.348: 94.4812% ( 9) 00:10:57.028 14107.348 - 14212.627: 94.5505% ( 9) 00:10:57.028 14212.627 - 14317.905: 94.5813% ( 4) 00:10:57.028 14423.184 - 14528.463: 94.6890% ( 14) 00:10:57.028 14528.463 - 14633.741: 94.7275% ( 5) 00:10:57.028 14633.741 - 14739.020: 94.7891% ( 8) 00:10:57.028 14739.020 - 14844.299: 94.8507% ( 8) 00:10:57.028 14844.299 - 14949.578: 94.8892% ( 5) 00:10:57.028 14949.578 - 15054.856: 94.9661% ( 10) 00:10:57.028 15054.856 - 15160.135: 95.1047% ( 18) 00:10:57.028 15160.135 - 15265.414: 95.2278% ( 16) 00:10:57.028 15265.414 - 15370.692: 95.3356% ( 14) 00:10:57.028 15370.692 - 15475.971: 95.4280% ( 12) 00:10:57.028 15475.971 - 15581.250: 95.5203% ( 12) 00:10:57.028 15581.250 - 15686.529: 95.6512% ( 17) 00:10:57.028 15686.529 - 15791.807: 95.8128% ( 21) 00:10:57.028 15791.807 - 15897.086: 95.9821% ( 22) 00:10:57.029 15897.086 - 16002.365: 96.1746% ( 25) 00:10:57.029 16002.365 - 16107.643: 96.3362% ( 21) 00:10:57.029 16107.643 - 16212.922: 96.4901% ( 20) 00:10:57.029 16212.922 - 16318.201: 96.6364% ( 19) 00:10:57.029 16318.201 - 16423.480: 96.7518% ( 15) 00:10:57.029 16423.480 - 16528.758: 96.8750% ( 16) 00:10:57.029 16528.758 - 16634.037: 97.0135% ( 18) 00:10:57.029 16634.037 - 16739.316: 97.0982% ( 11) 00:10:57.029 16739.316 - 16844.594: 97.1906% ( 12) 00:10:57.029 16844.594 - 16949.873: 97.2368% ( 6) 00:10:57.029 16949.873 - 17055.152: 97.2752% ( 5) 00:10:57.029 17055.152 - 17160.431: 97.3214% ( 6) 00:10:57.029 17160.431 - 17265.709: 97.3599% ( 5) 00:10:57.029 17265.709 - 17370.988: 97.3984% ( 5) 00:10:57.029 17370.988 - 17476.267: 97.4446% ( 6) 00:10:57.029 17476.267 - 17581.545: 97.4754% ( 4) 00:10:57.029 17581.545 - 17686.824: 97.5139% ( 5) 00:10:57.029 17686.824 - 17792.103: 97.5369% ( 3) 00:10:57.029 18634.333 - 18739.611: 97.5523% ( 2) 00:10:57.029 18739.611 - 18844.890: 97.6370% ( 11) 00:10:57.029 18844.890 - 18950.169: 97.7294% ( 12) 00:10:57.029 18950.169 - 19055.447: 97.8371% ( 14) 00:10:57.029 19055.447 - 19160.726: 97.9372% ( 13) 00:10:57.029 19160.726 - 19266.005: 98.0296% ( 12) 00:10:57.029 19266.005 - 19371.284: 98.1373% ( 14) 00:10:57.029 19371.284 - 19476.562: 98.2297% ( 12) 00:10:57.029 19476.562 - 19581.841: 98.3220% ( 12) 00:10:57.029 19581.841 - 19687.120: 98.4683% ( 19) 00:10:57.029 19687.120 - 19792.398: 98.5991% ( 17) 00:10:57.029 19792.398 - 19897.677: 98.6453% ( 6) 00:10:57.029 19897.677 - 20002.956: 98.6838% ( 5) 00:10:57.029 20002.956 - 20108.235: 98.7223% ( 5) 00:10:57.029 20108.235 - 20213.513: 98.7685% ( 6) 00:10:57.029 20213.513 - 20318.792: 98.8070% ( 5) 00:10:57.029 20318.792 - 20424.071: 98.8377% ( 4) 00:10:57.029 20424.071 - 20529.349: 98.8762% ( 5) 00:10:57.029 20529.349 - 20634.628: 98.9224% ( 6) 00:10:57.029 20634.628 - 20739.907: 98.9609% ( 5) 00:10:57.029 20739.907 - 
20845.186: 99.0071% ( 6)
00:10:57.029 20845.186 - 20950.464: 99.0148% ( 1)
00:10:57.029 29478.040 - 29688.598: 99.0456% ( 4)
00:10:57.029 29688.598 - 29899.155: 99.0994% ( 7)
00:10:57.029 29899.155 - 30109.712: 99.1456% ( 6)
00:10:57.029 30109.712 - 30320.270: 99.1995% ( 7)
00:10:57.029 30320.270 - 30530.827: 99.2457% ( 6)
00:10:57.029 30530.827 - 30741.385: 99.2996% ( 7)
00:10:57.029 30741.385 - 30951.942: 99.3458% ( 6)
00:10:57.029 30951.942 - 31162.500: 99.3996% ( 7)
00:10:57.029 31162.500 - 31373.057: 99.4458% ( 6)
00:10:57.029 31373.057 - 31583.614: 99.4997% ( 7)
00:10:57.029 31583.614 - 31794.172: 99.5074% ( 1)
00:10:57.029 36215.878 - 36426.435: 99.5305% ( 3)
00:10:57.029 36426.435 - 36636.993: 99.5844% ( 7)
00:10:57.029 36636.993 - 36847.550: 99.6382% ( 7)
00:10:57.029 36847.550 - 37058.108: 99.6844% ( 6)
00:10:57.029 37058.108 - 37268.665: 99.7306% ( 6)
00:10:57.029 37268.665 - 37479.222: 99.7845% ( 7)
00:10:57.029 37479.222 - 37689.780: 99.8307% ( 6)
00:10:57.029 37689.780 - 37900.337: 99.8768% ( 6)
00:10:57.029 37900.337 - 38110.895: 99.9307% ( 7)
00:10:57.029 38110.895 - 38321.452: 99.9769% ( 6)
00:10:57.029 38321.452 - 38532.010: 100.0000% ( 3)
00:10:57.029
00:10:57.029 08:23:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:10:58.440 Initializing NVMe Controllers
00:10:58.440 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:58.440 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:58.440 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:58.440 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:58.440 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:58.440 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:58.440 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:58.440 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:58.440 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:58.440 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:58.440 Initialization complete. Launching workers.
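For orientation, the headline numbers in the device table below follow directly from the flags on the spdk_nvme_perf invocation above: -q 128 outstanding I/Os per namespace, -o 12288 (12 KiB writes), -t 1 second. A minimal sanity-check sketch in Python; the helper name check_row is hypothetical and not part of SPDK:

    # Hypothetical check (not an SPDK tool): relate one "Device Information"
    # row below to the -q/-o flags of the spdk_nvme_perf run above.
    QUEUE_DEPTH = 128        # -q 128
    IO_SIZE_BYTES = 12288    # -o 12288 (12 KiB writes)

    def check_row(iops: float, avg_latency_us: float) -> None:
        # Throughput is IOPS times I/O size, reported in MiB/s.
        mib_per_s = iops * IO_SIZE_BYTES / (1024 * 1024)
        # Little's law: mean in-flight I/Os = IOPS * mean latency (in seconds).
        in_flight = iops * avg_latency_us / 1_000_000
        print(f"{mib_per_s:.2f} MiB/s, ~{in_flight:.0f} I/Os in flight")

    # PCIE (0000:00:10.0) NSID 1 row: 12321.47 IOPS, 10413.42 us average.
    check_row(12321.47, 10413.42)   # -> 144.39 MiB/s, ~128 I/Os in flight

The in-flight figure matching the queue depth suggests each namespace runs saturated at its own 128-deep queue, which is consistent with all six rows reporting the same IOPS and MiB/s.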
00:10:58.440 ========================================================
00:10:58.440                                                   Latency(us)
00:10:58.440 Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:58.440 PCIE (0000:00:10.0) NSID 1 from core 0:   12321.47     144.39   10413.42    7978.60   45820.61
00:10:58.440 PCIE (0000:00:11.0) NSID 1 from core 0:   12321.47     144.39   10393.92    8151.20   43534.82
00:10:58.440 PCIE (0000:00:13.0) NSID 1 from core 0:   12321.47     144.39   10374.49    8191.25   42272.00
00:10:58.440 PCIE (0000:00:12.0) NSID 1 from core 0:   12321.47     144.39   10355.03    8214.98   40158.97
00:10:58.440 PCIE (0000:00:12.0) NSID 2 from core 0:   12321.47     144.39   10335.34    8261.62   38098.00
00:10:58.440 PCIE (0000:00:12.0) NSID 3 from core 0:   12321.47     144.39   10316.04    7928.62   35959.37
00:10:58.440 ========================================================
00:10:58.440 Total                                  :   73928.80     866.35   10364.71    7928.62   45820.61
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8527.576us
00:10:58.440 10.00000% : 9159.248us
00:10:58.440 25.00000% : 9422.445us
00:10:58.440 50.00000% : 9790.920us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10791.068us
00:10:58.440 95.00000% : 13212.479us
00:10:58.440 98.00000% : 20213.513us
00:10:58.440 99.00000% : 31794.172us
00:10:58.440 99.50000% : 43374.831us
00:10:58.440 99.90000% : 45269.847us
00:10:58.440 99.99000% : 45901.520us
00:10:58.440 99.99900% : 45901.520us
00:10:58.440 99.99990% : 45901.520us
00:10:58.440 99.99999% : 45901.520us
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8632.855us
00:10:58.440 10.00000% : 9211.888us
00:10:58.440 25.00000% : 9475.084us
00:10:58.440 50.00000% : 9790.920us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10685.790us
00:10:58.440 95.00000% : 13317.757us
00:10:58.440 98.00000% : 19897.677us
00:10:58.440 99.00000% : 32636.402us
00:10:58.440 99.50000% : 41479.814us
00:10:58.440 99.90000% : 43164.273us
00:10:58.440 99.99000% : 43585.388us
00:10:58.440 99.99900% : 43585.388us
00:10:58.440 99.99990% : 43585.388us
00:10:58.440 99.99999% : 43585.388us
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8685.494us
00:10:58.440 10.00000% : 9211.888us
00:10:58.440 25.00000% : 9475.084us
00:10:58.440 50.00000% : 9843.560us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10685.790us
00:10:58.440 95.00000% : 12949.282us
00:10:58.440 98.00000% : 18844.890us
00:10:58.440 99.00000% : 31794.172us
00:10:58.440 99.50000% : 40216.469us
00:10:58.440 99.90000% : 41900.929us
00:10:58.440 99.99000% : 42322.043us
00:10:58.440 99.99900% : 42322.043us
00:10:58.440 99.99990% : 42322.043us
00:10:58.440 99.99999% : 42322.043us
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8738.133us
00:10:58.440 10.00000% : 9211.888us
00:10:58.440 25.00000% : 9475.084us
00:10:58.440 50.00000% : 9790.920us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10738.429us
00:10:58.440 95.00000% : 13107.200us
00:10:58.440 98.00000% : 19476.562us
00:10:58.440 99.00000% : 30530.827us
00:10:58.440 99.50000% : 36847.550us
00:10:58.440 99.90000% : 39795.354us
00:10:58.440 99.99000% : 40216.469us
00:10:58.440 99.99900% : 40216.469us
00:10:58.440 99.99990% : 40216.469us
00:10:58.440 99.99999% : 40216.469us
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8685.494us
00:10:58.440 10.00000% : 9159.248us
00:10:58.440 25.00000% : 9422.445us
00:10:58.440 50.00000% : 9843.560us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10791.068us
00:10:58.440 95.00000% : 12949.282us
00:10:58.440 98.00000% : 20634.628us
00:10:58.440 99.00000% : 27372.466us
00:10:58.440 99.50000% : 36005.320us
00:10:58.440 99.90000% : 37689.780us
00:10:58.440 99.99000% : 38110.895us
00:10:58.440 99.99900% : 38110.895us
00:10:58.440 99.99990% : 38110.895us
00:10:58.440 99.99999% : 38110.895us
00:10:58.440
00:10:58.440 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:58.440 =================================================================================
00:10:58.440 1.00000% : 8580.215us
00:10:58.440 10.00000% : 9211.888us
00:10:58.440 25.00000% : 9475.084us
00:10:58.440 50.00000% : 9790.920us
00:10:58.440 75.00000% : 10212.035us
00:10:58.440 90.00000% : 10791.068us
00:10:58.440 95.00000% : 13791.512us
00:10:58.440 98.00000% : 21161.022us
00:10:58.440 99.00000% : 26319.679us
00:10:58.440 99.50000% : 32425.844us
00:10:58.440 99.90000% : 35584.206us
00:10:58.440 99.99000% : 36005.320us
00:10:58.440 99.99900% : 36005.320us
00:10:58.440 99.99990% : 36005.320us
00:10:58.440 99.99999% : 36005.320us
00:10:58.441
00:10:58.441 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:58.441 ==============================================================================
00:10:58.441 Range in us Cumulative IO count
00:10:58.441 7948.543 - 8001.182: 0.0081% ( 1)
00:10:58.441 8001.182 - 8053.822: 0.0162% ( 1)
00:10:58.441 8106.461 - 8159.100: 0.0405% ( 3)
00:10:58.441 8159.100 - 8211.740: 0.0648% ( 3)
00:10:58.441 8211.740 - 8264.379: 0.0972% ( 4)
00:10:58.441 8264.379 - 8317.018: 0.2024% ( 13)
00:10:58.441 8317.018 - 8369.658: 0.3400% ( 17)
00:10:58.441 8369.658 - 8422.297: 0.7610% ( 52)
00:10:58.441 8422.297 - 8474.937: 0.9796% ( 27)
00:10:58.441 8474.937 - 8527.576: 1.3196% ( 42)
00:10:58.441 8527.576 - 8580.215: 1.5868% ( 33)
00:10:58.441 8580.215 - 8632.855: 1.7892% ( 25)
00:10:58.441 8632.855 - 8685.494: 2.0078% ( 27)
00:10:58.441 8685.494 - 8738.133: 2.3235% ( 39)
00:10:58.441 8738.133 - 8790.773: 2.7688% ( 55)
00:10:58.441 8790.773 - 8843.412: 3.1817% ( 51)
00:10:58.441 8843.412 - 8896.051: 3.9184% ( 91)
00:10:58.441 8896.051 - 8948.691: 5.0680% ( 142)
00:10:58.441 8948.691 - 9001.330: 6.4848% ( 175)
00:10:58.441 9001.330 - 9053.969: 8.0554% ( 194)
00:10:58.441 9053.969 - 9106.609: 9.4074% ( 167)
00:10:58.441 9106.609 - 9159.248: 11.0427% ( 202)
00:10:58.441 9159.248 - 9211.888: 13.0181% ( 244)
00:10:58.441 9211.888 - 9264.527: 15.3174% ( 284)
00:10:58.441 9264.527 - 9317.166: 18.2723% ( 365)
00:10:58.441 9317.166 - 9369.806: 21.6240% ( 414)
00:10:58.441 9369.806 - 9422.445: 25.2348% ( 446)
00:10:58.441 9422.445 - 9475.084: 28.4326% ( 395)
00:10:58.441 9475.084 - 9527.724: 31.7519% ( 410)
00:10:58.441 9527.724 - 9580.363: 35.5408% ( 468)
00:10:58.441 9580.363 - 9633.002: 39.5078% ( 490)
00:10:58.441 9633.002 - 9685.642: 43.5719% ( 502)
00:10:58.441 9685.642 - 9738.281: 47.5874% ( 496)
00:10:58.441 9738.281 - 9790.920: 51.5382% ( 488)
00:10:58.441 9790.920 - 9843.560: 55.5376% ( 494)
00:10:58.441 9843.560 - 9896.199: 58.7678% ( 399)
00:10:58.441 9896.199 - 9948.839: 61.2775% ( 310)
00:10:58.441 9948.839 - 10001.478: 64.6940% ( 422)
00:10:58.441 10001.478 - 10054.117: 67.4790% ( 344)
00:10:58.441 10054.117 - 10106.757: 70.1911% ( 335)
00:10:58.441 10106.757 - 10159.396: 73.1299% ( 363)
00:10:58.441 10159.396 - 10212.035: 75.4453% ( 286)
00:10:58.441 10212.035 - 10264.675: 77.1049% ( 205)
00:10:58.441 10264.675 - 10317.314: 78.6431% ( 190)
00:10:58.441 10317.314 - 10369.953: 80.6833% ( 252)
00:10:58.441 10369.953 - 10422.593: 82.2296% ( 191)
00:10:58.441 10422.593 - 10475.232: 83.5006% ( 157)
00:10:58.441 10475.232 - 10527.871: 85.1441% ( 203)
00:10:58.441 10527.871 - 10580.511: 86.5690% ( 176)
00:10:58.441 10580.511 - 10633.150: 87.7267% ( 143)
00:10:58.441 10633.150 - 10685.790: 88.5848% ( 106)
00:10:58.441 10685.790 - 10738.429: 89.5240% ( 116)
00:10:58.441 10738.429 - 10791.068: 90.2445% ( 89)
00:10:58.441 10791.068 - 10843.708: 90.6736% ( 53)
00:10:58.441 10843.708 - 10896.347: 90.9974% ( 40)
00:10:58.441 10896.347 - 10948.986: 91.2889% ( 36)
00:10:58.441 10948.986 - 11001.626: 91.6127% ( 40)
00:10:58.441 11001.626 - 11054.265: 91.7746% ( 20)
00:10:58.441 11054.265 - 11106.904: 91.9365% ( 20)
00:10:58.441 11106.904 - 11159.544: 92.0742% ( 17)
00:10:58.441 11159.544 - 11212.183: 92.1470% ( 9)
00:10:58.441 11212.183 - 11264.822: 92.1956% ( 6)
00:10:58.441 11264.822 - 11317.462: 92.2280% ( 4)
00:10:58.441 11317.462 - 11370.101: 92.2361% ( 1)
00:10:58.441 11370.101 - 11422.741: 92.2442% ( 1)
00:10:58.441 11422.741 - 11475.380: 92.2523% ( 1)
00:10:58.441 11475.380 - 11528.019: 92.4547% ( 25)
00:10:58.441 11528.019 - 11580.659: 92.5437% ( 11)
00:10:58.441 11580.659 - 11633.298: 92.5842% ( 5)
00:10:58.441 11633.298 - 11685.937: 92.7380% ( 19)
00:10:58.441 11685.937 - 11738.577: 92.8999% ( 20)
00:10:58.441 11738.577 - 11791.216: 92.9566% ( 7)
00:10:58.441 11791.216 - 11843.855: 93.0052% ( 6)
00:10:58.441 11843.855 - 11896.495: 93.0457% ( 5)
00:10:58.441 11896.495 - 11949.134: 93.1023% ( 7)
00:10:58.441 11949.134 - 12001.773: 93.1671% ( 8)
00:10:58.441 12001.773 - 12054.413: 93.2885% ( 15)
00:10:58.441 12054.413 - 12107.052: 93.4181% ( 16)
00:10:58.441 12107.052 - 12159.692: 93.5395% ( 15)
00:10:58.441 12159.692 - 12212.331: 93.6448% ( 13)
00:10:58.441 12212.331 - 12264.970: 93.7176% ( 9)
00:10:58.441 12264.970 - 12317.610: 93.8067% ( 11)
00:10:58.441 12317.610 - 12370.249: 93.8552% ( 6)
00:10:58.441 12370.249 - 12422.888: 93.8876% ( 4)
00:10:58.441 12422.888 - 12475.528: 93.9362% ( 6)
00:10:58.441 12475.528 - 12528.167: 94.1224% ( 23)
00:10:58.441 12528.167 - 12580.806: 94.2762% ( 19)
00:10:58.441 12580.806 - 12633.446: 94.3653% ( 11)
00:10:58.441 12633.446 - 12686.085: 94.4381% ( 9)
00:10:58.441 12686.085 - 12738.724: 94.4462% ( 1)
00:10:58.441 12738.724 - 12791.364: 94.4624% ( 2)
00:10:58.441 12791.364 - 12844.003: 94.5272% ( 8)
00:10:58.441 12844.003 - 12896.643: 94.5758% ( 6)
00:10:58.441 12896.643 - 12949.282: 94.6324% ( 7)
00:10:58.441 12949.282 - 13001.921: 94.7053% ( 9)
00:10:58.441 13001.921 - 13054.561: 94.7782% ( 9)
00:10:58.441 13054.561 - 13107.200: 94.8996% ( 15)
00:10:58.441 13107.200 - 13159.839: 94.9968% ( 12)
00:10:58.441 13159.839 - 13212.479: 95.0210% ( 3)
00:10:58.441 13212.479 - 13265.118: 95.0615% ( 5)
00:10:58.441 13265.118 - 13317.757: 95.0858% ( 3)
00:10:58.441 13317.757 - 13370.397: 95.0939% ( 1)
00:10:58.441 13370.397 - 13423.036: 95.1101% ( 2)
00:10:58.441 13423.036 - 13475.676: 95.1263% ( 2)
00:10:58.441 13475.676 - 13580.954: 95.1425% ( 2)
00:10:58.441 13580.954 - 13686.233: 95.1587% ( 2)
00:10:58.441 13686.233 - 13791.512: 95.1749% ( 2)
00:10:58.441 13791.512 - 13896.790: 95.1830% ( 1)
00:10:58.441 13896.790 - 14002.069: 95.1992% ( 2)
00:10:58.441 14002.069 - 14107.348: 95.2315% ( 4)
00:10:58.441 14107.348 - 14212.627: 95.2720% ( 5)
00:10:58.441 14212.627 - 14317.905: 95.3773% ( 13)
00:10:58.441 14317.905 - 14423.184: 95.4420% ( 8)
00:10:58.441 14423.184 - 14528.463: 95.4987% ( 7)
00:10:58.441 14528.463 - 14633.741: 95.5311% ( 4)
00:10:58.441 14633.741 - 14739.020: 95.5554% ( 3)
00:10:58.441 14739.020 - 14844.299: 95.5878% ( 4)
00:10:58.441 14844.299 - 14949.578: 95.6201% ( 4)
00:10:58.441 14949.578 - 15054.856: 95.6606% ( 5)
00:10:58.441 15054.856 - 15160.135: 95.7011% ( 5)
00:10:58.441 15160.135 - 15265.414: 95.7416% ( 5)
00:10:58.441 15265.414 - 15370.692: 95.7740% ( 4)
00:10:58.441 15370.692 - 15475.971: 95.8225% ( 6)
00:10:58.441 15475.971 - 15581.250: 96.0087% ( 23)
00:10:58.441 15581.250 - 15686.529: 96.0411% ( 4)
00:10:58.441 15686.529 - 15791.807: 96.0897% ( 6)
00:10:58.441 15791.807 - 15897.086: 96.1626% ( 9)
00:10:58.441 15897.086 - 16002.365: 96.2678% ( 13)
00:10:58.441 16002.365 - 16107.643: 96.4459% ( 22)
00:10:58.441 16107.643 - 16212.922: 96.6240% ( 22)
00:10:58.441 16212.922 - 16318.201: 96.7374% ( 14)
00:10:58.441 16318.201 - 16423.480: 96.8669% ( 16)
00:10:58.441 16423.480 - 16528.758: 97.0531% ( 23)
00:10:58.441 16528.758 - 16634.037: 97.3041% ( 31)
00:10:58.441 16634.037 - 16739.316: 97.4093% ( 13)
00:10:58.441 16739.316 - 16844.594: 97.4984% ( 11)
00:10:58.441 16844.594 - 16949.873: 97.6036% ( 13)
00:10:58.441 16949.873 - 17055.152: 97.6927% ( 11)
00:10:58.441 17055.152 - 17160.431: 97.7736% ( 10)
00:10:58.441 17160.431 - 17265.709: 97.8465% ( 9)
00:10:58.441 17265.709 - 17370.988: 97.9113% ( 8)
00:10:58.441 17476.267 - 17581.545: 97.9275% ( 2)
00:10:58.441 19897.677 - 20002.956: 97.9437% ( 2)
00:10:58.441 20002.956 - 20108.235: 97.9760% ( 4)
00:10:58.441 20108.235 - 20213.513: 98.0084% ( 4)
00:10:58.441 20213.513 - 20318.792: 98.0408% ( 4)
00:10:58.441 20318.792 - 20424.071: 98.0732% ( 4)
00:10:58.441 20424.071 - 20529.349: 98.0975% ( 3)
00:10:58.441 20529.349 - 20634.628: 98.1380% ( 5)
00:10:58.441 20634.628 - 20739.907: 98.1703% ( 4)
00:10:58.441 20739.907 - 20845.186: 98.1946% ( 3)
00:10:58.441 20845.186 - 20950.464: 98.2270% ( 4)
00:10:58.441 20950.464 - 21055.743: 98.2594% ( 4)
00:10:58.441 21055.743 - 21161.022: 98.2918% ( 4)
00:10:58.441 21161.022 - 21266.300: 98.3161% ( 3)
00:10:58.441 21266.300 - 21371.579: 98.3484% ( 4)
00:10:58.441 21371.579 - 21476.858: 98.3889% ( 5)
00:10:58.441 21476.858 - 21582.137: 98.4213% ( 4)
00:10:58.441 21582.137 - 21687.415: 98.4456% ( 3)
00:10:58.441 22003.251 - 22108.530: 98.4537% ( 1)
00:10:58.441 22213.809 - 22319.088: 98.4618% ( 1)
00:10:58.441 22319.088 - 22424.366: 98.4861% ( 3)
00:10:58.441 22424.366 - 22529.645: 98.5589% ( 9)
00:10:58.441 22529.645 - 22634.924: 98.6561% ( 12)
00:10:58.441 22634.924 - 22740.202: 98.8018% ( 18)
00:10:58.441 22740.202 - 22845.481: 98.8342% ( 4)
00:10:58.441 22845.481 - 22950.760: 98.8747% ( 5)
00:10:58.441 22950.760 - 23056.039: 98.8990% ( 3)
00:10:58.441 23056.039 - 23161.317: 98.9233% ( 3)
00:10:58.441 23161.317 - 23266.596: 98.9637% ( 5)
00:10:58.441 31583.614 - 31794.172: 99.0204% ( 7)
00:10:58.441 31794.172 - 32004.729: 99.0609% ( 5)
00:10:58.441 32004.729 - 32215.287: 99.1095% ( 6)
00:10:58.441 32215.287 -
32425.844: 99.1661% ( 7) 00:10:58.441 32425.844 - 32636.402: 99.2066% ( 5) 00:10:58.441 32636.402 - 32846.959: 99.2552% ( 6) 00:10:58.441 32846.959 - 33057.516: 99.3038% ( 6) 00:10:58.441 33057.516 - 33268.074: 99.3442% ( 5) 00:10:58.441 33268.074 - 33478.631: 99.3847% ( 5) 00:10:58.442 33478.631 - 33689.189: 99.4333% ( 6) 00:10:58.442 33689.189 - 33899.746: 99.4819% ( 6) 00:10:58.442 43164.273 - 43374.831: 99.5142% ( 4) 00:10:58.442 43374.831 - 43585.388: 99.5547% ( 5) 00:10:58.442 43585.388 - 43795.945: 99.5952% ( 5) 00:10:58.442 43795.945 - 44006.503: 99.6438% ( 6) 00:10:58.442 44006.503 - 44217.060: 99.6762% ( 4) 00:10:58.442 44217.060 - 44427.618: 99.7328% ( 7) 00:10:58.442 44427.618 - 44638.175: 99.7814% ( 6) 00:10:58.442 44638.175 - 44848.733: 99.8219% ( 5) 00:10:58.442 44848.733 - 45059.290: 99.8543% ( 4) 00:10:58.442 45059.290 - 45269.847: 99.9028% ( 6) 00:10:58.442 45269.847 - 45480.405: 99.9271% ( 3) 00:10:58.442 45480.405 - 45690.962: 99.9757% ( 6) 00:10:58.442 45690.962 - 45901.520: 100.0000% ( 3) 00:10:58.442 00:10:58.442 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:58.442 ============================================================================== 00:10:58.442 Range in us Cumulative IO count 00:10:58.442 8106.461 - 8159.100: 0.0081% ( 1) 00:10:58.442 8159.100 - 8211.740: 0.0486% ( 5) 00:10:58.442 8211.740 - 8264.379: 0.0891% ( 5) 00:10:58.442 8264.379 - 8317.018: 0.1376% ( 6) 00:10:58.442 8317.018 - 8369.658: 0.2024% ( 8) 00:10:58.442 8369.658 - 8422.297: 0.4453% ( 30) 00:10:58.442 8422.297 - 8474.937: 0.5262% ( 10) 00:10:58.442 8474.937 - 8527.576: 0.6153% ( 11) 00:10:58.442 8527.576 - 8580.215: 0.9634% ( 43) 00:10:58.442 8580.215 - 8632.855: 1.0606% ( 12) 00:10:58.442 8632.855 - 8685.494: 1.1496% ( 11) 00:10:58.442 8685.494 - 8738.133: 1.2710% ( 15) 00:10:58.442 8738.133 - 8790.773: 1.5544% ( 35) 00:10:58.442 8790.773 - 8843.412: 1.9673% ( 51) 00:10:58.442 8843.412 - 8896.051: 2.3397% ( 46) 00:10:58.442 8896.051 - 8948.691: 3.1169% ( 96) 00:10:58.442 8948.691 - 9001.330: 3.8212% ( 87) 00:10:58.442 9001.330 - 9053.969: 5.0599% ( 153) 00:10:58.442 9053.969 - 9106.609: 7.1405% ( 257) 00:10:58.442 9106.609 - 9159.248: 8.9702% ( 226) 00:10:58.442 9159.248 - 9211.888: 10.8565% ( 233) 00:10:58.442 9211.888 - 9264.527: 13.4634% ( 322) 00:10:58.442 9264.527 - 9317.166: 16.1269% ( 329) 00:10:58.442 9317.166 - 9369.806: 19.3086% ( 393) 00:10:58.442 9369.806 - 9422.445: 22.5308% ( 398) 00:10:58.442 9422.445 - 9475.084: 26.1577% ( 448) 00:10:58.442 9475.084 - 9527.724: 30.6104% ( 550) 00:10:58.442 9527.724 - 9580.363: 35.4437% ( 597) 00:10:58.442 9580.363 - 9633.002: 39.8235% ( 541) 00:10:58.442 9633.002 - 9685.642: 44.0253% ( 519) 00:10:58.442 9685.642 - 9738.281: 48.1703% ( 512) 00:10:58.442 9738.281 - 9790.920: 52.2830% ( 508) 00:10:58.442 9790.920 - 9843.560: 55.8128% ( 436) 00:10:58.442 9843.560 - 9896.199: 59.3183% ( 433) 00:10:58.442 9896.199 - 9948.839: 62.4595% ( 388) 00:10:58.442 9948.839 - 10001.478: 65.6898% ( 399) 00:10:58.442 10001.478 - 10054.117: 68.5881% ( 358) 00:10:58.442 10054.117 - 10106.757: 71.2435% ( 328) 00:10:58.442 10106.757 - 10159.396: 74.0690% ( 349) 00:10:58.442 10159.396 - 10212.035: 76.7487% ( 331) 00:10:58.442 10212.035 - 10264.675: 78.7079% ( 242) 00:10:58.442 10264.675 - 10317.314: 80.7642% ( 254) 00:10:58.442 10317.314 - 10369.953: 82.3672% ( 198) 00:10:58.442 10369.953 - 10422.593: 84.5450% ( 269) 00:10:58.442 10422.593 - 10475.232: 86.0023% ( 180) 00:10:58.442 10475.232 - 10527.871: 87.3624% ( 168) 00:10:58.442 10527.871 - 
10580.511: 88.4715% ( 137) 00:10:58.442 10580.511 - 10633.150: 89.3863% ( 113) 00:10:58.442 10633.150 - 10685.790: 90.3983% ( 125) 00:10:58.442 10685.790 - 10738.429: 90.9003% ( 62) 00:10:58.442 10738.429 - 10791.068: 91.3212% ( 52) 00:10:58.442 10791.068 - 10843.708: 91.5560% ( 29) 00:10:58.442 10843.708 - 10896.347: 91.6613% ( 13) 00:10:58.442 10896.347 - 10948.986: 91.6937% ( 4) 00:10:58.442 10948.986 - 11001.626: 91.7422% ( 6) 00:10:58.442 11001.626 - 11054.265: 91.7989% ( 7) 00:10:58.442 11054.265 - 11106.904: 91.8556% ( 7) 00:10:58.442 11106.904 - 11159.544: 91.9284% ( 9) 00:10:58.442 11159.544 - 11212.183: 92.0580% ( 16) 00:10:58.442 11212.183 - 11264.822: 92.1065% ( 6) 00:10:58.442 11264.822 - 11317.462: 92.1875% ( 10) 00:10:58.442 11317.462 - 11370.101: 92.2442% ( 7) 00:10:58.442 11370.101 - 11422.741: 92.3170% ( 9) 00:10:58.442 11422.741 - 11475.380: 92.3656% ( 6) 00:10:58.442 11475.380 - 11528.019: 92.4304% ( 8) 00:10:58.442 11528.019 - 11580.659: 92.6409% ( 26) 00:10:58.442 11580.659 - 11633.298: 92.7299% ( 11) 00:10:58.442 11633.298 - 11685.937: 92.8271% ( 12) 00:10:58.442 11685.937 - 11738.577: 92.9809% ( 19) 00:10:58.442 11738.577 - 11791.216: 93.0457% ( 8) 00:10:58.442 11791.216 - 11843.855: 93.1023% ( 7) 00:10:58.442 11843.855 - 11896.495: 93.1347% ( 4) 00:10:58.442 11896.495 - 11949.134: 93.1671% ( 4) 00:10:58.442 11949.134 - 12001.773: 93.2157% ( 6) 00:10:58.442 12001.773 - 12054.413: 93.2400% ( 3) 00:10:58.442 12054.413 - 12107.052: 93.2804% ( 5) 00:10:58.442 12107.052 - 12159.692: 93.3533% ( 9) 00:10:58.442 12159.692 - 12212.331: 93.4424% ( 11) 00:10:58.442 12212.331 - 12264.970: 93.5557% ( 14) 00:10:58.442 12264.970 - 12317.610: 93.6528% ( 12) 00:10:58.442 12317.610 - 12370.249: 93.7257% ( 9) 00:10:58.442 12370.249 - 12422.888: 93.8229% ( 12) 00:10:58.442 12422.888 - 12475.528: 93.8795% ( 7) 00:10:58.442 12475.528 - 12528.167: 94.0576% ( 22) 00:10:58.442 12528.167 - 12580.806: 94.1062% ( 6) 00:10:58.442 12580.806 - 12633.446: 94.1386% ( 4) 00:10:58.442 12633.446 - 12686.085: 94.1791% ( 5) 00:10:58.442 12686.085 - 12738.724: 94.2115% ( 4) 00:10:58.442 12738.724 - 12791.364: 94.2358% ( 3) 00:10:58.442 12791.364 - 12844.003: 94.2762% ( 5) 00:10:58.442 12844.003 - 12896.643: 94.3005% ( 3) 00:10:58.442 12896.643 - 12949.282: 94.3248% ( 3) 00:10:58.442 12949.282 - 13001.921: 94.3572% ( 4) 00:10:58.442 13001.921 - 13054.561: 94.3896% ( 4) 00:10:58.442 13054.561 - 13107.200: 94.4301% ( 5) 00:10:58.442 13107.200 - 13159.839: 94.4867% ( 7) 00:10:58.442 13159.839 - 13212.479: 94.6972% ( 26) 00:10:58.442 13212.479 - 13265.118: 94.7782% ( 10) 00:10:58.442 13265.118 - 13317.757: 95.0453% ( 33) 00:10:58.442 13317.757 - 13370.397: 95.2073% ( 20) 00:10:58.442 13370.397 - 13423.036: 95.2477% ( 5) 00:10:58.442 13423.036 - 13475.676: 95.2720% ( 3) 00:10:58.442 13475.676 - 13580.954: 95.3125% ( 5) 00:10:58.442 13580.954 - 13686.233: 95.3611% ( 6) 00:10:58.442 13686.233 - 13791.512: 95.4582% ( 12) 00:10:58.442 13791.512 - 13896.790: 95.5473% ( 11) 00:10:58.442 13896.790 - 14002.069: 95.6930% ( 18) 00:10:58.442 14002.069 - 14107.348: 95.7578% ( 8) 00:10:58.442 14107.348 - 14212.627: 95.9278% ( 21) 00:10:58.442 14212.627 - 14317.905: 96.0249% ( 12) 00:10:58.442 14317.905 - 14423.184: 96.0978% ( 9) 00:10:58.442 14423.184 - 14528.463: 96.1707% ( 9) 00:10:58.442 14528.463 - 14633.741: 96.2111% ( 5) 00:10:58.442 14633.741 - 14739.020: 96.2435% ( 4) 00:10:58.442 14739.020 - 14844.299: 96.2759% ( 4) 00:10:58.442 14844.299 - 14949.578: 96.3164% ( 5) 00:10:58.442 14949.578 - 15054.856: 96.3650% ( 6) 
00:10:58.442 15054.856 - 15160.135: 96.3892% ( 3) 00:10:58.442 15160.135 - 15265.414: 96.4459% ( 7) 00:10:58.442 15265.414 - 15370.692: 96.5188% ( 9) 00:10:58.442 15370.692 - 15475.971: 96.5835% ( 8) 00:10:58.442 15475.971 - 15581.250: 96.6483% ( 8) 00:10:58.442 15581.250 - 15686.529: 96.7293% ( 10) 00:10:58.442 15686.529 - 15791.807: 96.7778% ( 6) 00:10:58.442 15791.807 - 15897.086: 96.8750% ( 12) 00:10:58.442 15897.086 - 16002.365: 97.0369% ( 20) 00:10:58.442 16002.365 - 16107.643: 97.1584% ( 15) 00:10:58.442 16107.643 - 16212.922: 97.2069% ( 6) 00:10:58.442 16212.922 - 16318.201: 97.2798% ( 9) 00:10:58.442 16318.201 - 16423.480: 97.3122% ( 4) 00:10:58.442 16423.480 - 16528.758: 97.3527% ( 5) 00:10:58.442 16528.758 - 16634.037: 97.4174% ( 8) 00:10:58.442 16739.316 - 16844.594: 97.4660% ( 6) 00:10:58.442 16844.594 - 16949.873: 97.5631% ( 12) 00:10:58.442 16949.873 - 17055.152: 97.7574% ( 24) 00:10:58.442 17055.152 - 17160.431: 97.8627% ( 13) 00:10:58.442 17160.431 - 17265.709: 97.9275% ( 8) 00:10:58.442 19687.120 - 19792.398: 97.9598% ( 4) 00:10:58.442 19792.398 - 19897.677: 98.0327% ( 9) 00:10:58.442 19897.677 - 20002.956: 98.1056% ( 9) 00:10:58.442 20002.956 - 20108.235: 98.1622% ( 7) 00:10:58.442 20108.235 - 20213.513: 98.1865% ( 3) 00:10:58.442 20213.513 - 20318.792: 98.2108% ( 3) 00:10:58.442 20318.792 - 20424.071: 98.2432% ( 4) 00:10:58.442 20424.071 - 20529.349: 98.2837% ( 5) 00:10:58.442 20529.349 - 20634.628: 98.3242% ( 5) 00:10:58.442 20634.628 - 20739.907: 98.3565% ( 4) 00:10:58.442 20739.907 - 20845.186: 98.3970% ( 5) 00:10:58.442 20845.186 - 20950.464: 98.4375% ( 5) 00:10:58.442 20950.464 - 21055.743: 98.4456% ( 1) 00:10:58.442 23266.596 - 23371.875: 98.4618% ( 2) 00:10:58.442 23371.875 - 23477.153: 98.5104% ( 6) 00:10:58.442 23477.153 - 23582.432: 98.5751% ( 8) 00:10:58.442 23582.432 - 23687.711: 98.6804% ( 13) 00:10:58.442 23687.711 - 23792.990: 98.8018% ( 15) 00:10:58.442 23792.990 - 23898.268: 98.8828% ( 10) 00:10:58.442 23898.268 - 24003.547: 98.9556% ( 9) 00:10:58.442 24003.547 - 24108.826: 98.9637% ( 1) 00:10:58.442 32215.287 - 32425.844: 98.9961% ( 4) 00:10:58.442 32425.844 - 32636.402: 99.0528% ( 7) 00:10:58.442 32636.402 - 32846.959: 99.1014% ( 6) 00:10:58.442 32846.959 - 33057.516: 99.1499% ( 6) 00:10:58.442 33057.516 - 33268.074: 99.2066% ( 7) 00:10:58.442 33268.074 - 33478.631: 99.2552% ( 6) 00:10:58.442 33478.631 - 33689.189: 99.3038% ( 6) 00:10:58.442 33689.189 - 33899.746: 99.3523% ( 6) 00:10:58.443 33899.746 - 34110.304: 99.4090% ( 7) 00:10:58.443 34110.304 - 34320.861: 99.4576% ( 6) 00:10:58.443 34320.861 - 34531.418: 99.4819% ( 3) 00:10:58.443 41058.699 - 41269.256: 99.4981% ( 2) 00:10:58.443 41269.256 - 41479.814: 99.5466% ( 6) 00:10:58.443 41479.814 - 41690.371: 99.5952% ( 6) 00:10:58.443 41690.371 - 41900.929: 99.6276% ( 4) 00:10:58.443 41900.929 - 42111.486: 99.6762% ( 6) 00:10:58.443 42111.486 - 42322.043: 99.7247% ( 6) 00:10:58.443 42322.043 - 42532.601: 99.7733% ( 6) 00:10:58.443 42532.601 - 42743.158: 99.8138% ( 5) 00:10:58.443 42743.158 - 42953.716: 99.8624% ( 6) 00:10:58.443 42953.716 - 43164.273: 99.9109% ( 6) 00:10:58.443 43164.273 - 43374.831: 99.9595% ( 6) 00:10:58.443 43374.831 - 43585.388: 100.0000% ( 5) 00:10:58.443 00:10:58.443 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:58.443 ============================================================================== 00:10:58.443 Range in us Cumulative IO count 00:10:58.443 8159.100 - 8211.740: 0.0162% ( 2) 00:10:58.443 8211.740 - 8264.379: 0.0567% ( 5) 00:10:58.443 8264.379 - 
8317.018: 0.1538% ( 12) 00:10:58.443 8317.018 - 8369.658: 0.2510% ( 12) 00:10:58.443 8369.658 - 8422.297: 0.4534% ( 25) 00:10:58.443 8422.297 - 8474.937: 0.5586% ( 13) 00:10:58.443 8474.937 - 8527.576: 0.7529% ( 24) 00:10:58.443 8527.576 - 8580.215: 0.8744% ( 15) 00:10:58.443 8580.215 - 8632.855: 0.9877% ( 14) 00:10:58.443 8632.855 - 8685.494: 1.1091% ( 15) 00:10:58.443 8685.494 - 8738.133: 1.3439% ( 29) 00:10:58.443 8738.133 - 8790.773: 1.6597% ( 39) 00:10:58.443 8790.773 - 8843.412: 2.1130% ( 56) 00:10:58.443 8843.412 - 8896.051: 2.6554% ( 67) 00:10:58.443 8896.051 - 8948.691: 3.6108% ( 118) 00:10:58.443 8948.691 - 9001.330: 4.7280% ( 138) 00:10:58.443 9001.330 - 9053.969: 5.9585% ( 152) 00:10:58.443 9053.969 - 9106.609: 7.5210% ( 193) 00:10:58.443 9106.609 - 9159.248: 9.8122% ( 283) 00:10:58.443 9159.248 - 9211.888: 12.2733% ( 304) 00:10:58.443 9211.888 - 9264.527: 14.9692% ( 333) 00:10:58.443 9264.527 - 9317.166: 17.8271% ( 353) 00:10:58.443 9317.166 - 9369.806: 21.4297% ( 445) 00:10:58.443 9369.806 - 9422.445: 24.8138% ( 418) 00:10:58.443 9422.445 - 9475.084: 27.8335% ( 373) 00:10:58.443 9475.084 - 9527.724: 31.3633% ( 436) 00:10:58.443 9527.724 - 9580.363: 34.3993% ( 375) 00:10:58.443 9580.363 - 9633.002: 37.7429% ( 413) 00:10:58.443 9633.002 - 9685.642: 41.2970% ( 439) 00:10:58.443 9685.642 - 9738.281: 44.6486% ( 414) 00:10:58.443 9738.281 - 9790.920: 48.8261% ( 516) 00:10:58.443 9790.920 - 9843.560: 52.8983% ( 503) 00:10:58.443 9843.560 - 9896.199: 56.2338% ( 412) 00:10:58.443 9896.199 - 9948.839: 60.1279% ( 481) 00:10:58.443 9948.839 - 10001.478: 64.1677% ( 499) 00:10:58.443 10001.478 - 10054.117: 67.8433% ( 454) 00:10:58.443 10054.117 - 10106.757: 71.2111% ( 416) 00:10:58.443 10106.757 - 10159.396: 73.9071% ( 333) 00:10:58.443 10159.396 - 10212.035: 76.7244% ( 348) 00:10:58.443 10212.035 - 10264.675: 78.8212% ( 259) 00:10:58.443 10264.675 - 10317.314: 80.7481% ( 238) 00:10:58.443 10317.314 - 10369.953: 82.8692% ( 262) 00:10:58.443 10369.953 - 10422.593: 84.2212% ( 167) 00:10:58.443 10422.593 - 10475.232: 85.5084% ( 159) 00:10:58.443 10475.232 - 10527.871: 86.9576% ( 179) 00:10:58.443 10527.871 - 10580.511: 88.1153% ( 143) 00:10:58.443 10580.511 - 10633.150: 89.4997% ( 171) 00:10:58.443 10633.150 - 10685.790: 90.2526% ( 93) 00:10:58.443 10685.790 - 10738.429: 91.0298% ( 96) 00:10:58.443 10738.429 - 10791.068: 91.4589% ( 53) 00:10:58.443 10791.068 - 10843.708: 91.7746% ( 39) 00:10:58.443 10843.708 - 10896.347: 92.1065% ( 41) 00:10:58.443 10896.347 - 10948.986: 92.3251% ( 27) 00:10:58.443 10948.986 - 11001.626: 92.6328% ( 38) 00:10:58.443 11001.626 - 11054.265: 92.9161% ( 35) 00:10:58.443 11054.265 - 11106.904: 93.0376% ( 15) 00:10:58.443 11106.904 - 11159.544: 93.0780% ( 5) 00:10:58.443 11159.544 - 11212.183: 93.1104% ( 4) 00:10:58.443 11212.183 - 11264.822: 93.1509% ( 5) 00:10:58.443 11264.822 - 11317.462: 93.1995% ( 6) 00:10:58.443 11317.462 - 11370.101: 93.2562% ( 7) 00:10:58.443 11370.101 - 11422.741: 93.3047% ( 6) 00:10:58.443 11422.741 - 11475.380: 93.3533% ( 6) 00:10:58.443 11475.380 - 11528.019: 93.3857% ( 4) 00:10:58.443 11528.019 - 11580.659: 93.4424% ( 7) 00:10:58.443 11580.659 - 11633.298: 93.6367% ( 24) 00:10:58.443 11633.298 - 11685.937: 93.6609% ( 3) 00:10:58.443 11685.937 - 11738.577: 93.6771% ( 2) 00:10:58.443 11738.577 - 11791.216: 93.7014% ( 3) 00:10:58.443 11791.216 - 11843.855: 93.7257% ( 3) 00:10:58.443 11843.855 - 11896.495: 93.7581% ( 4) 00:10:58.443 11896.495 - 11949.134: 93.7824% ( 3) 00:10:58.443 11949.134 - 12001.773: 93.7986% ( 2) 00:10:58.443 12107.052 
- 12159.692: 93.8067% ( 1) 00:10:58.443 12159.692 - 12212.331: 93.8229% ( 2) 00:10:58.443 12212.331 - 12264.970: 93.8472% ( 3) 00:10:58.443 12264.970 - 12317.610: 93.8552% ( 1) 00:10:58.443 12317.610 - 12370.249: 93.8795% ( 3) 00:10:58.443 12370.249 - 12422.888: 93.8957% ( 2) 00:10:58.443 12422.888 - 12475.528: 93.9200% ( 3) 00:10:58.443 12475.528 - 12528.167: 93.9686% ( 6) 00:10:58.443 12528.167 - 12580.806: 94.0253% ( 7) 00:10:58.443 12580.806 - 12633.446: 94.1224% ( 12) 00:10:58.443 12633.446 - 12686.085: 94.2519% ( 16) 00:10:58.443 12686.085 - 12738.724: 94.3896% ( 17) 00:10:58.443 12738.724 - 12791.364: 94.5920% ( 25) 00:10:58.443 12791.364 - 12844.003: 94.7053% ( 14) 00:10:58.443 12844.003 - 12896.643: 94.9644% ( 32) 00:10:58.443 12896.643 - 12949.282: 95.0130% ( 6) 00:10:58.443 12949.282 - 13001.921: 95.0696% ( 7) 00:10:58.443 13001.921 - 13054.561: 95.1263% ( 7) 00:10:58.443 13054.561 - 13107.200: 95.1668% ( 5) 00:10:58.443 13107.200 - 13159.839: 95.1992% ( 4) 00:10:58.443 13159.839 - 13212.479: 95.2234% ( 3) 00:10:58.443 13212.479 - 13265.118: 95.2558% ( 4) 00:10:58.443 13265.118 - 13317.757: 95.2963% ( 5) 00:10:58.443 13317.757 - 13370.397: 95.3287% ( 4) 00:10:58.443 13370.397 - 13423.036: 95.3368% ( 1) 00:10:58.443 13423.036 - 13475.676: 95.3611% ( 3) 00:10:58.443 13475.676 - 13580.954: 95.4177% ( 7) 00:10:58.443 13580.954 - 13686.233: 95.4744% ( 7) 00:10:58.443 13686.233 - 13791.512: 95.5311% ( 7) 00:10:58.443 13791.512 - 13896.790: 95.7497% ( 27) 00:10:58.443 13896.790 - 14002.069: 95.8063% ( 7) 00:10:58.443 14002.069 - 14107.348: 95.8387% ( 4) 00:10:58.443 14107.348 - 14212.627: 95.8549% ( 2) 00:10:58.443 14317.905 - 14423.184: 95.8630% ( 1) 00:10:58.443 14423.184 - 14528.463: 95.8873% ( 3) 00:10:58.443 14528.463 - 14633.741: 96.0168% ( 16) 00:10:58.443 14633.741 - 14739.020: 96.2597% ( 30) 00:10:58.443 14739.020 - 14844.299: 96.4216% ( 20) 00:10:58.443 14844.299 - 14949.578: 96.5835% ( 20) 00:10:58.443 14949.578 - 15054.856: 96.6564% ( 9) 00:10:58.443 15054.856 - 15160.135: 96.7050% ( 6) 00:10:58.443 15160.135 - 15265.414: 96.7212% ( 2) 00:10:58.443 15265.414 - 15370.692: 96.7536% ( 4) 00:10:58.443 15370.692 - 15475.971: 96.8507% ( 12) 00:10:58.443 15475.971 - 15581.250: 96.9964% ( 18) 00:10:58.443 15581.250 - 15686.529: 97.1260% ( 16) 00:10:58.443 15686.529 - 15791.807: 97.1826% ( 7) 00:10:58.443 15791.807 - 15897.086: 97.2474% ( 8) 00:10:58.443 15897.086 - 16002.365: 97.2798% ( 4) 00:10:58.443 16002.365 - 16107.643: 97.3284% ( 6) 00:10:58.443 16107.643 - 16212.922: 97.3850% ( 7) 00:10:58.443 16212.922 - 16318.201: 97.4093% ( 3) 00:10:58.443 18107.939 - 18213.218: 97.4579% ( 6) 00:10:58.443 18213.218 - 18318.496: 97.5227% ( 8) 00:10:58.443 18318.496 - 18423.775: 97.5712% ( 6) 00:10:58.443 18423.775 - 18529.054: 97.6036% ( 4) 00:10:58.443 18529.054 - 18634.333: 97.6927% ( 11) 00:10:58.443 18634.333 - 18739.611: 97.8303% ( 17) 00:10:58.443 18739.611 - 18844.890: 98.0813% ( 31) 00:10:58.443 18844.890 - 18950.169: 98.2108% ( 16) 00:10:58.443 18950.169 - 19055.447: 98.2837% ( 9) 00:10:58.443 19055.447 - 19160.726: 98.3161% ( 4) 00:10:58.443 19160.726 - 19266.005: 98.3484% ( 4) 00:10:58.443 19266.005 - 19371.284: 98.3808% ( 4) 00:10:58.443 19371.284 - 19476.562: 98.4132% ( 4) 00:10:58.443 19476.562 - 19581.841: 98.4456% ( 4) 00:10:58.443 24108.826 - 24214.104: 98.4780% ( 4) 00:10:58.443 24214.104 - 24319.383: 98.5427% ( 8) 00:10:58.443 24319.383 - 24424.662: 98.6399% ( 12) 00:10:58.443 24424.662 - 24529.941: 98.8018% ( 20) 00:10:58.443 24529.941 - 24635.219: 98.8909% ( 11) 
00:10:58.443 24635.219 - 24740.498: 98.9556% ( 8) 00:10:58.443 24740.498 - 24845.777: 98.9637% ( 1) 00:10:58.443 31373.057 - 31583.614: 98.9961% ( 4) 00:10:58.443 31583.614 - 31794.172: 99.0447% ( 6) 00:10:58.443 31794.172 - 32004.729: 99.1014% ( 7) 00:10:58.443 32004.729 - 32215.287: 99.1580% ( 7) 00:10:58.443 32215.287 - 32425.844: 99.2066% ( 6) 00:10:58.443 32425.844 - 32636.402: 99.2633% ( 7) 00:10:58.443 32636.402 - 32846.959: 99.3119% ( 6) 00:10:58.443 32846.959 - 33057.516: 99.3685% ( 7) 00:10:58.443 33057.516 - 33268.074: 99.4171% ( 6) 00:10:58.443 33268.074 - 33478.631: 99.4657% ( 6) 00:10:58.443 33478.631 - 33689.189: 99.4819% ( 2) 00:10:58.443 40005.912 - 40216.469: 99.5304% ( 6) 00:10:58.443 40216.469 - 40427.027: 99.5790% ( 6) 00:10:58.443 40427.027 - 40637.584: 99.6195% ( 5) 00:10:58.443 40637.584 - 40848.141: 99.6762% ( 7) 00:10:58.443 40848.141 - 41058.699: 99.7247% ( 6) 00:10:58.443 41058.699 - 41269.256: 99.7733% ( 6) 00:10:58.443 41269.256 - 41479.814: 99.8219% ( 6) 00:10:58.443 41479.814 - 41690.371: 99.8705% ( 6) 00:10:58.443 41690.371 - 41900.929: 99.9190% ( 6) 00:10:58.444 41900.929 - 42111.486: 99.9595% ( 5) 00:10:58.444 42111.486 - 42322.043: 100.0000% ( 5) 00:10:58.444 00:10:58.444 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:58.444 ============================================================================== 00:10:58.444 Range in us Cumulative IO count 00:10:58.444 8211.740 - 8264.379: 0.0081% ( 1) 00:10:58.444 8264.379 - 8317.018: 0.0162% ( 1) 00:10:58.444 8369.658 - 8422.297: 0.0243% ( 1) 00:10:58.444 8422.297 - 8474.937: 0.0972% ( 9) 00:10:58.444 8474.937 - 8527.576: 0.2348% ( 17) 00:10:58.444 8527.576 - 8580.215: 0.3724% ( 17) 00:10:58.444 8580.215 - 8632.855: 0.5991% ( 28) 00:10:58.444 8632.855 - 8685.494: 0.9553% ( 44) 00:10:58.444 8685.494 - 8738.133: 1.3034% ( 43) 00:10:58.444 8738.133 - 8790.773: 1.6597% ( 44) 00:10:58.444 8790.773 - 8843.412: 2.2102% ( 68) 00:10:58.444 8843.412 - 8896.051: 2.8174% ( 75) 00:10:58.444 8896.051 - 8948.691: 3.3112% ( 61) 00:10:58.444 8948.691 - 9001.330: 4.1451% ( 103) 00:10:58.444 9001.330 - 9053.969: 5.0518% ( 112) 00:10:58.444 9053.969 - 9106.609: 6.4362% ( 171) 00:10:58.444 9106.609 - 9159.248: 8.6383% ( 272) 00:10:58.444 9159.248 - 9211.888: 10.9132% ( 281) 00:10:58.444 9211.888 - 9264.527: 13.6820% ( 342) 00:10:58.444 9264.527 - 9317.166: 16.7989% ( 385) 00:10:58.444 9317.166 - 9369.806: 20.0939% ( 407) 00:10:58.444 9369.806 - 9422.445: 23.7290% ( 449) 00:10:58.444 9422.445 - 9475.084: 27.3802% ( 451) 00:10:58.444 9475.084 - 9527.724: 31.6305% ( 525) 00:10:58.444 9527.724 - 9580.363: 35.3303% ( 457) 00:10:58.444 9580.363 - 9633.002: 39.2568% ( 485) 00:10:58.444 9633.002 - 9685.642: 43.7419% ( 554) 00:10:58.444 9685.642 - 9738.281: 47.7413% ( 494) 00:10:58.444 9738.281 - 9790.920: 51.1496% ( 421) 00:10:58.444 9790.920 - 9843.560: 54.3799% ( 399) 00:10:58.444 9843.560 - 9896.199: 57.8449% ( 428) 00:10:58.444 9896.199 - 9948.839: 61.2370% ( 419) 00:10:58.444 9948.839 - 10001.478: 65.2445% ( 495) 00:10:58.444 10001.478 - 10054.117: 68.1995% ( 365) 00:10:58.444 10054.117 - 10106.757: 71.8507% ( 451) 00:10:58.444 10106.757 - 10159.396: 74.2066% ( 291) 00:10:58.444 10159.396 - 10212.035: 76.9997% ( 345) 00:10:58.444 10212.035 - 10264.675: 79.4203% ( 299) 00:10:58.444 10264.675 - 10317.314: 81.4767% ( 254) 00:10:58.444 10317.314 - 10369.953: 83.0311% ( 192) 00:10:58.444 10369.953 - 10422.593: 84.5369% ( 186) 00:10:58.444 10422.593 - 10475.232: 86.4233% ( 233) 00:10:58.444 10475.232 - 10527.871: 
87.5891% ( 144) 00:10:58.444 10527.871 - 10580.511: 88.4067% ( 101) 00:10:58.444 10580.511 - 10633.150: 89.1030% ( 86) 00:10:58.444 10633.150 - 10685.790: 89.7021% ( 74) 00:10:58.444 10685.790 - 10738.429: 90.0988% ( 49) 00:10:58.444 10738.429 - 10791.068: 90.4307% ( 41) 00:10:58.444 10791.068 - 10843.708: 90.8193% ( 48) 00:10:58.444 10843.708 - 10896.347: 91.2484% ( 53) 00:10:58.444 10896.347 - 10948.986: 91.6289% ( 47) 00:10:58.444 10948.986 - 11001.626: 91.9446% ( 39) 00:10:58.444 11001.626 - 11054.265: 92.2361% ( 36) 00:10:58.444 11054.265 - 11106.904: 92.5113% ( 34) 00:10:58.444 11106.904 - 11159.544: 92.9485% ( 54) 00:10:58.444 11159.544 - 11212.183: 93.1428% ( 24) 00:10:58.444 11212.183 - 11264.822: 93.2319% ( 11) 00:10:58.444 11264.822 - 11317.462: 93.4424% ( 26) 00:10:58.444 11317.462 - 11370.101: 93.5233% ( 10) 00:10:58.444 11370.101 - 11422.741: 93.5800% ( 7) 00:10:58.444 11422.741 - 11475.380: 93.6286% ( 6) 00:10:58.444 11475.380 - 11528.019: 93.6609% ( 4) 00:10:58.444 11528.019 - 11580.659: 93.6852% ( 3) 00:10:58.444 11580.659 - 11633.298: 93.7095% ( 3) 00:10:58.444 11633.298 - 11685.937: 93.7257% ( 2) 00:10:58.444 11685.937 - 11738.577: 93.7419% ( 2) 00:10:58.444 11738.577 - 11791.216: 93.7581% ( 2) 00:10:58.444 11791.216 - 11843.855: 93.7743% ( 2) 00:10:58.444 11843.855 - 11896.495: 93.7824% ( 1) 00:10:58.444 12422.888 - 12475.528: 93.7986% ( 2) 00:10:58.444 12475.528 - 12528.167: 93.8310% ( 4) 00:10:58.444 12528.167 - 12580.806: 93.8876% ( 7) 00:10:58.444 12580.806 - 12633.446: 93.9686% ( 10) 00:10:58.444 12633.446 - 12686.085: 94.0900% ( 15) 00:10:58.444 12686.085 - 12738.724: 94.3167% ( 28) 00:10:58.444 12738.724 - 12791.364: 94.3896% ( 9) 00:10:58.444 12791.364 - 12844.003: 94.4705% ( 10) 00:10:58.444 12844.003 - 12896.643: 94.6405% ( 21) 00:10:58.444 12896.643 - 12949.282: 94.7539% ( 14) 00:10:58.444 12949.282 - 13001.921: 94.8672% ( 14) 00:10:58.444 13001.921 - 13054.561: 94.9482% ( 10) 00:10:58.444 13054.561 - 13107.200: 95.0372% ( 11) 00:10:58.444 13107.200 - 13159.839: 95.1749% ( 17) 00:10:58.444 13159.839 - 13212.479: 95.3125% ( 17) 00:10:58.444 13212.479 - 13265.118: 95.5554% ( 30) 00:10:58.444 13265.118 - 13317.757: 95.6606% ( 13) 00:10:58.444 13317.757 - 13370.397: 95.7011% ( 5) 00:10:58.444 13370.397 - 13423.036: 95.7335% ( 4) 00:10:58.444 13423.036 - 13475.676: 95.7659% ( 4) 00:10:58.444 13475.676 - 13580.954: 95.8306% ( 8) 00:10:58.444 13580.954 - 13686.233: 95.8549% ( 3) 00:10:58.444 14212.627 - 14317.905: 95.9116% ( 7) 00:10:58.444 14317.905 - 14423.184: 95.9926% ( 10) 00:10:58.444 14423.184 - 14528.463: 96.0573% ( 8) 00:10:58.444 14528.463 - 14633.741: 96.1059% ( 6) 00:10:58.444 14633.741 - 14739.020: 96.1383% ( 4) 00:10:58.444 14739.020 - 14844.299: 96.1545% ( 2) 00:10:58.444 14844.299 - 14949.578: 96.1869% ( 4) 00:10:58.444 14949.578 - 15054.856: 96.2111% ( 3) 00:10:58.444 15054.856 - 15160.135: 96.2921% ( 10) 00:10:58.444 15160.135 - 15265.414: 96.4621% ( 21) 00:10:58.444 15265.414 - 15370.692: 96.6969% ( 29) 00:10:58.444 15370.692 - 15475.971: 96.8912% ( 24) 00:10:58.444 15475.971 - 15581.250: 97.0369% ( 18) 00:10:58.444 15581.250 - 15686.529: 97.1503% ( 14) 00:10:58.444 15686.529 - 15791.807: 97.2474% ( 12) 00:10:58.444 15791.807 - 15897.086: 97.2798% ( 4) 00:10:58.444 15897.086 - 16002.365: 97.3122% ( 4) 00:10:58.444 16002.365 - 16107.643: 97.3365% ( 3) 00:10:58.444 16107.643 - 16212.922: 97.3688% ( 4) 00:10:58.444 16212.922 - 16318.201: 97.4093% ( 5) 00:10:58.444 17581.545 - 17686.824: 97.4174% ( 1) 00:10:58.444 17686.824 - 17792.103: 97.4822% ( 8) 
00:10:58.444 17792.103 - 17897.382: 97.5470% ( 8) 00:10:58.444 17897.382 - 18002.660: 97.6117% ( 8) 00:10:58.444 18002.660 - 18107.939: 97.6441% ( 4) 00:10:58.444 18107.939 - 18213.218: 97.6684% ( 3) 00:10:58.444 18213.218 - 18318.496: 97.6927% ( 3) 00:10:58.444 18318.496 - 18423.775: 97.7170% ( 3) 00:10:58.444 18423.775 - 18529.054: 97.7494% ( 4) 00:10:58.444 18529.054 - 18634.333: 97.7898% ( 5) 00:10:58.444 18634.333 - 18739.611: 97.8222% ( 4) 00:10:58.444 18739.611 - 18844.890: 97.8627% ( 5) 00:10:58.444 18844.890 - 18950.169: 97.8951% ( 4) 00:10:58.444 18950.169 - 19055.447: 97.9275% ( 4) 00:10:58.444 19266.005 - 19371.284: 97.9437% ( 2) 00:10:58.444 19371.284 - 19476.562: 98.0408% ( 12) 00:10:58.444 19476.562 - 19581.841: 98.2513% ( 26) 00:10:58.444 19581.841 - 19687.120: 98.3808% ( 16) 00:10:58.444 19687.120 - 19792.398: 98.4456% ( 8) 00:10:58.444 24529.941 - 24635.219: 98.4618% ( 2) 00:10:58.444 24635.219 - 24740.498: 98.5347% ( 9) 00:10:58.444 24740.498 - 24845.777: 98.6318% ( 12) 00:10:58.444 24845.777 - 24951.055: 98.7370% ( 13) 00:10:58.444 24951.055 - 25056.334: 98.8261% ( 11) 00:10:58.444 25056.334 - 25161.613: 98.8828% ( 7) 00:10:58.444 25161.613 - 25266.892: 98.9313% ( 6) 00:10:58.444 25266.892 - 25372.170: 98.9637% ( 4) 00:10:58.444 30320.270 - 30530.827: 99.2228% ( 32) 00:10:58.444 30530.827 - 30741.385: 99.2552% ( 4) 00:10:58.444 30741.385 - 30951.942: 99.2957% ( 5) 00:10:58.444 30951.942 - 31162.500: 99.3361% ( 5) 00:10:58.444 31162.500 - 31373.057: 99.3766% ( 5) 00:10:58.444 31373.057 - 31583.614: 99.4252% ( 6) 00:10:58.444 31583.614 - 31794.172: 99.4657% ( 5) 00:10:58.444 31794.172 - 32004.729: 99.4819% ( 2) 00:10:58.444 36636.993 - 36847.550: 99.5466% ( 8) 00:10:58.444 36847.550 - 37058.108: 99.6276% ( 10) 00:10:58.444 37058.108 - 37268.665: 99.6357% ( 1) 00:10:58.444 37268.665 - 37479.222: 99.6438% ( 1) 00:10:58.444 38321.452 - 38532.010: 99.6762% ( 4) 00:10:58.444 38532.010 - 38742.567: 99.7166% ( 5) 00:10:58.444 38742.567 - 38953.124: 99.7490% ( 4) 00:10:58.444 38953.124 - 39163.682: 99.7895% ( 5) 00:10:58.444 39163.682 - 39374.239: 99.8219% ( 4) 00:10:58.444 39374.239 - 39584.797: 99.8705% ( 6) 00:10:58.444 39584.797 - 39795.354: 99.9190% ( 6) 00:10:58.444 39795.354 - 40005.912: 99.9595% ( 5) 00:10:58.444 40005.912 - 40216.469: 100.0000% ( 5) 00:10:58.444 00:10:58.444 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:58.444 ============================================================================== 00:10:58.444 Range in us Cumulative IO count 00:10:58.444 8211.740 - 8264.379: 0.0081% ( 1) 00:10:58.444 8264.379 - 8317.018: 0.0648% ( 7) 00:10:58.444 8317.018 - 8369.658: 0.1295% ( 8) 00:10:58.444 8369.658 - 8422.297: 0.1862% ( 7) 00:10:58.444 8422.297 - 8474.937: 0.3400% ( 19) 00:10:58.444 8474.937 - 8527.576: 0.4534% ( 14) 00:10:58.444 8527.576 - 8580.215: 0.5910% ( 17) 00:10:58.444 8580.215 - 8632.855: 0.7448% ( 19) 00:10:58.444 8632.855 - 8685.494: 1.0282% ( 35) 00:10:58.444 8685.494 - 8738.133: 1.3439% ( 39) 00:10:58.444 8738.133 - 8790.773: 1.8944% ( 68) 00:10:58.444 8790.773 - 8843.412: 2.6716% ( 96) 00:10:58.444 8843.412 - 8896.051: 3.4812% ( 100) 00:10:58.444 8896.051 - 8948.691: 4.4365% ( 118) 00:10:58.444 8948.691 - 9001.330: 5.4971% ( 131) 00:10:58.444 9001.330 - 9053.969: 6.8491% ( 167) 00:10:58.444 9053.969 - 9106.609: 8.4521% ( 198) 00:10:58.445 9106.609 - 9159.248: 10.7189% ( 280) 00:10:58.445 9159.248 - 9211.888: 12.9534% ( 276) 00:10:58.445 9211.888 - 9264.527: 15.7383% ( 344) 00:10:58.445 9264.527 - 9317.166: 18.8229% ( 381) 
00:10:58.445 9317.166 - 9369.806: 22.6036% ( 467) 00:10:58.445 9369.806 - 9422.445: 26.1334% ( 436) 00:10:58.445 9422.445 - 9475.084: 29.9142% ( 467) 00:10:58.445 9475.084 - 9527.724: 33.3306% ( 422) 00:10:58.445 9527.724 - 9580.363: 37.0628% ( 461) 00:10:58.445 9580.363 - 9633.002: 40.1797% ( 385) 00:10:58.445 9633.002 - 9685.642: 43.3371% ( 390) 00:10:58.445 9685.642 - 9738.281: 46.4945% ( 390) 00:10:58.445 9738.281 - 9790.920: 49.6924% ( 395) 00:10:58.445 9790.920 - 9843.560: 53.0036% ( 409) 00:10:58.445 9843.560 - 9896.199: 57.0434% ( 499) 00:10:58.445 9896.199 - 9948.839: 60.7594% ( 459) 00:10:58.445 9948.839 - 10001.478: 64.1354% ( 417) 00:10:58.445 10001.478 - 10054.117: 67.3494% ( 397) 00:10:58.445 10054.117 - 10106.757: 70.2558% ( 359) 00:10:58.445 10106.757 - 10159.396: 72.9841% ( 337) 00:10:58.445 10159.396 - 10212.035: 75.1943% ( 273) 00:10:58.445 10212.035 - 10264.675: 77.1940% ( 247) 00:10:58.445 10264.675 - 10317.314: 79.0722% ( 232) 00:10:58.445 10317.314 - 10369.953: 80.7481% ( 207) 00:10:58.445 10369.953 - 10422.593: 82.6911% ( 240) 00:10:58.445 10422.593 - 10475.232: 84.0512% ( 168) 00:10:58.445 10475.232 - 10527.871: 85.2898% ( 153) 00:10:58.445 10527.871 - 10580.511: 86.5852% ( 160) 00:10:58.445 10580.511 - 10633.150: 87.8319% ( 154) 00:10:58.445 10633.150 - 10685.790: 89.0706% ( 153) 00:10:58.445 10685.790 - 10738.429: 89.8802% ( 100) 00:10:58.445 10738.429 - 10791.068: 90.6655% ( 97) 00:10:58.445 10791.068 - 10843.708: 91.1674% ( 62) 00:10:58.445 10843.708 - 10896.347: 91.5722% ( 50) 00:10:58.445 10896.347 - 10948.986: 91.8637% ( 36) 00:10:58.445 10948.986 - 11001.626: 92.3980% ( 66) 00:10:58.445 11001.626 - 11054.265: 92.7947% ( 49) 00:10:58.445 11054.265 - 11106.904: 92.9161% ( 15) 00:10:58.445 11106.904 - 11159.544: 93.0214% ( 13) 00:10:58.445 11159.544 - 11212.183: 93.1104% ( 11) 00:10:58.445 11212.183 - 11264.822: 93.1752% ( 8) 00:10:58.445 11264.822 - 11317.462: 93.1995% ( 3) 00:10:58.445 11317.462 - 11370.101: 93.2481% ( 6) 00:10:58.445 11370.101 - 11422.741: 93.3290% ( 10) 00:10:58.445 11422.741 - 11475.380: 93.4262% ( 12) 00:10:58.445 11475.380 - 11528.019: 93.5881% ( 20) 00:10:58.445 11528.019 - 11580.659: 93.6528% ( 8) 00:10:58.445 11580.659 - 11633.298: 93.7014% ( 6) 00:10:58.445 11633.298 - 11685.937: 93.7581% ( 7) 00:10:58.445 11685.937 - 11738.577: 93.7824% ( 3) 00:10:58.445 11738.577 - 11791.216: 93.7905% ( 1) 00:10:58.445 12107.052 - 12159.692: 93.7986% ( 1) 00:10:58.445 12159.692 - 12212.331: 93.8391% ( 5) 00:10:58.445 12212.331 - 12264.970: 93.8714% ( 4) 00:10:58.445 12264.970 - 12317.610: 93.9119% ( 5) 00:10:58.445 12317.610 - 12370.249: 93.9848% ( 9) 00:10:58.445 12370.249 - 12422.888: 94.0657% ( 10) 00:10:58.445 12422.888 - 12475.528: 94.3248% ( 32) 00:10:58.445 12475.528 - 12528.167: 94.3896% ( 8) 00:10:58.445 12528.167 - 12580.806: 94.4624% ( 9) 00:10:58.445 12580.806 - 12633.446: 94.5434% ( 10) 00:10:58.445 12633.446 - 12686.085: 94.6324% ( 11) 00:10:58.445 12686.085 - 12738.724: 94.7053% ( 9) 00:10:58.445 12738.724 - 12791.364: 94.7944% ( 11) 00:10:58.445 12791.364 - 12844.003: 94.8834% ( 11) 00:10:58.445 12844.003 - 12896.643: 94.9644% ( 10) 00:10:58.445 12896.643 - 12949.282: 95.0291% ( 8) 00:10:58.445 12949.282 - 13001.921: 95.1992% ( 21) 00:10:58.445 13001.921 - 13054.561: 95.2639% ( 8) 00:10:58.445 13054.561 - 13107.200: 95.3125% ( 6) 00:10:58.445 13107.200 - 13159.839: 95.3530% ( 5) 00:10:58.445 13159.839 - 13212.479: 95.3773% ( 3) 00:10:58.445 13212.479 - 13265.118: 95.4177% ( 5) 00:10:58.445 13265.118 - 13317.757: 95.4582% ( 5) 
00:10:58.445 13317.757 - 13370.397: 95.4906% ( 4) 00:10:58.445 13370.397 - 13423.036: 95.5068% ( 2) 00:10:58.445 13423.036 - 13475.676: 95.5473% ( 5) 00:10:58.445 13475.676 - 13580.954: 95.6768% ( 16) 00:10:58.445 13580.954 - 13686.233: 95.7416% ( 8) 00:10:58.445 13686.233 - 13791.512: 95.7821% ( 5) 00:10:58.445 13791.512 - 13896.790: 95.8144% ( 4) 00:10:58.445 13896.790 - 14002.069: 95.8549% ( 5) 00:10:58.445 14107.348 - 14212.627: 95.9116% ( 7) 00:10:58.445 14212.627 - 14317.905: 95.9764% ( 8) 00:10:58.445 14317.905 - 14423.184: 96.0411% ( 8) 00:10:58.445 14423.184 - 14528.463: 96.1059% ( 8) 00:10:58.445 14528.463 - 14633.741: 96.1707% ( 8) 00:10:58.445 14633.741 - 14739.020: 96.1949% ( 3) 00:10:58.445 14739.020 - 14844.299: 96.2273% ( 4) 00:10:58.445 14844.299 - 14949.578: 96.2516% ( 3) 00:10:58.445 14949.578 - 15054.856: 96.2759% ( 3) 00:10:58.445 15054.856 - 15160.135: 96.3812% ( 13) 00:10:58.445 15160.135 - 15265.414: 96.4783% ( 12) 00:10:58.445 15265.414 - 15370.692: 96.5835% ( 13) 00:10:58.445 15370.692 - 15475.971: 96.6645% ( 10) 00:10:58.445 15475.971 - 15581.250: 96.8021% ( 17) 00:10:58.445 15581.250 - 15686.529: 97.0288% ( 28) 00:10:58.445 15686.529 - 15791.807: 97.1503% ( 15) 00:10:58.445 15791.807 - 15897.086: 97.2798% ( 16) 00:10:58.445 15897.086 - 16002.365: 97.3527% ( 9) 00:10:58.445 16002.365 - 16107.643: 97.3931% ( 5) 00:10:58.445 16107.643 - 16212.922: 97.4093% ( 2) 00:10:58.445 16739.316 - 16844.594: 97.4174% ( 1) 00:10:58.445 16844.594 - 16949.873: 97.4417% ( 3) 00:10:58.445 16949.873 - 17055.152: 97.5227% ( 10) 00:10:58.445 17055.152 - 17160.431: 97.5793% ( 7) 00:10:58.445 17160.431 - 17265.709: 97.6036% ( 3) 00:10:58.445 17265.709 - 17370.988: 97.6279% ( 3) 00:10:58.445 17370.988 - 17476.267: 97.6603% ( 4) 00:10:58.445 17476.267 - 17581.545: 97.7008% ( 5) 00:10:58.445 17581.545 - 17686.824: 97.7332% ( 4) 00:10:58.445 17686.824 - 17792.103: 97.7736% ( 5) 00:10:58.445 17792.103 - 17897.382: 97.8141% ( 5) 00:10:58.445 17897.382 - 18002.660: 97.8546% ( 5) 00:10:58.445 18002.660 - 18107.939: 97.8951% ( 5) 00:10:58.445 18107.939 - 18213.218: 97.9275% ( 4) 00:10:58.445 20424.071 - 20529.349: 97.9598% ( 4) 00:10:58.445 20529.349 - 20634.628: 98.0327% ( 9) 00:10:58.445 20634.628 - 20739.907: 98.2432% ( 26) 00:10:58.445 20739.907 - 20845.186: 98.3646% ( 15) 00:10:58.445 20845.186 - 20950.464: 98.4456% ( 10) 00:10:58.445 24845.777 - 24951.055: 98.4780% ( 4) 00:10:58.445 24951.055 - 25056.334: 98.5347% ( 7) 00:10:58.445 25056.334 - 25161.613: 98.5751% ( 5) 00:10:58.445 25161.613 - 25266.892: 98.6237% ( 6) 00:10:58.445 25266.892 - 25372.170: 98.6642% ( 5) 00:10:58.445 25372.170 - 25477.449: 98.7128% ( 6) 00:10:58.445 25477.449 - 25582.728: 98.7613% ( 6) 00:10:58.445 25582.728 - 25688.006: 98.8018% ( 5) 00:10:58.445 25688.006 - 25793.285: 98.8423% ( 5) 00:10:58.445 25793.285 - 25898.564: 98.8909% ( 6) 00:10:58.445 25898.564 - 26003.843: 98.9394% ( 6) 00:10:58.445 26003.843 - 26109.121: 98.9637% ( 3) 00:10:58.445 27161.908 - 27372.466: 99.0123% ( 6) 00:10:58.445 27372.466 - 27583.023: 99.0609% ( 6) 00:10:58.445 27583.023 - 27793.581: 99.1176% ( 7) 00:10:58.445 27793.581 - 28004.138: 99.1661% ( 6) 00:10:58.445 28004.138 - 28214.696: 99.2228% ( 7) 00:10:58.445 28214.696 - 28425.253: 99.2714% ( 6) 00:10:58.445 28425.253 - 28635.810: 99.3199% ( 6) 00:10:58.445 28635.810 - 28846.368: 99.3685% ( 6) 00:10:58.445 28846.368 - 29056.925: 99.4090% ( 5) 00:10:58.445 29056.925 - 29267.483: 99.4576% ( 6) 00:10:58.445 29267.483 - 29478.040: 99.4819% ( 3) 00:10:58.445 35794.763 - 36005.320: 
99.5223% ( 5) 00:10:58.445 36005.320 - 36215.878: 99.5709% ( 6) 00:10:58.445 36215.878 - 36426.435: 99.6114% ( 5) 00:10:58.445 36426.435 - 36636.993: 99.6600% ( 6) 00:10:58.445 36636.993 - 36847.550: 99.7085% ( 6) 00:10:58.445 36847.550 - 37058.108: 99.7652% ( 7) 00:10:58.445 37058.108 - 37268.665: 99.8138% ( 6) 00:10:58.445 37268.665 - 37479.222: 99.8624% ( 6) 00:10:58.445 37479.222 - 37689.780: 99.9109% ( 6) 00:10:58.445 37689.780 - 37900.337: 99.9514% ( 5) 00:10:58.445 37900.337 - 38110.895: 100.0000% ( 6) 00:10:58.446 00:10:58.446 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:58.446 ============================================================================== 00:10:58.446 Range in us Cumulative IO count 00:10:58.446 7895.904 - 7948.543: 0.0081% ( 1) 00:10:58.446 8106.461 - 8159.100: 0.0324% ( 3) 00:10:58.446 8159.100 - 8211.740: 0.0648% ( 4) 00:10:58.446 8211.740 - 8264.379: 0.1619% ( 12) 00:10:58.446 8264.379 - 8317.018: 0.2834% ( 15) 00:10:58.446 8317.018 - 8369.658: 0.4777% ( 24) 00:10:58.446 8369.658 - 8422.297: 0.6153% ( 17) 00:10:58.446 8422.297 - 8474.937: 0.7934% ( 22) 00:10:58.446 8474.937 - 8527.576: 0.9148% ( 15) 00:10:58.446 8527.576 - 8580.215: 1.0363% ( 15) 00:10:58.446 8580.215 - 8632.855: 1.1820% ( 18) 00:10:58.446 8632.855 - 8685.494: 1.2872% ( 13) 00:10:58.446 8685.494 - 8738.133: 1.5625% ( 34) 00:10:58.446 8738.133 - 8790.773: 1.8216% ( 32) 00:10:58.446 8790.773 - 8843.412: 2.2021% ( 47) 00:10:58.446 8843.412 - 8896.051: 2.7931% ( 73) 00:10:58.446 8896.051 - 8948.691: 3.5784% ( 97) 00:10:58.446 8948.691 - 9001.330: 4.5418% ( 119) 00:10:58.446 9001.330 - 9053.969: 6.2419% ( 210) 00:10:58.446 9053.969 - 9106.609: 7.6263% ( 171) 00:10:58.446 9106.609 - 9159.248: 9.5855% ( 242) 00:10:58.446 9159.248 - 9211.888: 11.6418% ( 254) 00:10:58.446 9211.888 - 9264.527: 14.0301% ( 295) 00:10:58.446 9264.527 - 9317.166: 17.1956% ( 391) 00:10:58.446 9317.166 - 9369.806: 20.3125% ( 385) 00:10:58.446 9369.806 - 9422.445: 24.1014% ( 468) 00:10:58.446 9422.445 - 9475.084: 28.6593% ( 563) 00:10:58.446 9475.084 - 9527.724: 33.5087% ( 599) 00:10:58.446 9527.724 - 9580.363: 37.4271% ( 484) 00:10:58.446 9580.363 - 9633.002: 42.0742% ( 574) 00:10:58.446 9633.002 - 9685.642: 45.3854% ( 409) 00:10:58.446 9685.642 - 9738.281: 48.8828% ( 432) 00:10:58.446 9738.281 - 9790.920: 52.0321% ( 389) 00:10:58.446 9790.920 - 9843.560: 54.8737% ( 351) 00:10:58.446 9843.560 - 9896.199: 57.5858% ( 335) 00:10:58.446 9896.199 - 9948.839: 60.7594% ( 392) 00:10:58.446 9948.839 - 10001.478: 63.6010% ( 351) 00:10:58.446 10001.478 - 10054.117: 66.7665% ( 391) 00:10:58.446 10054.117 - 10106.757: 69.8915% ( 386) 00:10:58.446 10106.757 - 10159.396: 72.8789% ( 369) 00:10:58.446 10159.396 - 10212.035: 75.6639% ( 344) 00:10:58.446 10212.035 - 10264.675: 77.8821% ( 274) 00:10:58.446 10264.675 - 10317.314: 79.8980% ( 249) 00:10:58.446 10317.314 - 10369.953: 81.6710% ( 219) 00:10:58.446 10369.953 - 10422.593: 83.0311% ( 168) 00:10:58.446 10422.593 - 10475.232: 84.3021% ( 157) 00:10:58.446 10475.232 - 10527.871: 85.5813% ( 158) 00:10:58.446 10527.871 - 10580.511: 86.7147% ( 140) 00:10:58.446 10580.511 - 10633.150: 87.6538% ( 116) 00:10:58.446 10633.150 - 10685.790: 88.6739% ( 126) 00:10:58.446 10685.790 - 10738.429: 89.6130% ( 116) 00:10:58.446 10738.429 - 10791.068: 90.2040% ( 73) 00:10:58.446 10791.068 - 10843.708: 90.6412% ( 54) 00:10:58.446 10843.708 - 10896.347: 91.1350% ( 61) 00:10:58.446 10896.347 - 10948.986: 91.6046% ( 58) 00:10:58.446 10948.986 - 11001.626: 91.8394% ( 29) 00:10:58.446 
11001.626 - 11054.265: 92.0256% ( 23) 00:10:58.446 11054.265 - 11106.904: 92.1875% ( 20) 00:10:58.446 11106.904 - 11159.544: 92.2442% ( 7) 00:10:58.446 11159.544 - 11212.183: 92.2847% ( 5) 00:10:58.446 11212.183 - 11264.822: 92.3251% ( 5) 00:10:58.446 11264.822 - 11317.462: 92.3656% ( 5) 00:10:58.446 11317.462 - 11370.101: 92.3737% ( 1) 00:10:58.446 11370.101 - 11422.741: 92.3980% ( 3) 00:10:58.446 11422.741 - 11475.380: 92.4466% ( 6) 00:10:58.446 11475.380 - 11528.019: 92.4790% ( 4) 00:10:58.446 11528.019 - 11580.659: 92.5518% ( 9) 00:10:58.446 11580.659 - 11633.298: 92.6247% ( 9) 00:10:58.446 11633.298 - 11685.937: 92.6733% ( 6) 00:10:58.446 11685.937 - 11738.577: 92.6975% ( 3) 00:10:58.446 11738.577 - 11791.216: 92.7137% ( 2) 00:10:58.446 11791.216 - 11843.855: 92.7866% ( 9) 00:10:58.446 11843.855 - 11896.495: 92.8918% ( 13) 00:10:58.446 11896.495 - 11949.134: 93.0133% ( 15) 00:10:58.446 11949.134 - 12001.773: 93.1590% ( 18) 00:10:58.446 12001.773 - 12054.413: 93.3614% ( 25) 00:10:58.446 12054.413 - 12107.052: 93.5881% ( 28) 00:10:58.446 12107.052 - 12159.692: 93.7743% ( 23) 00:10:58.446 12159.692 - 12212.331: 94.0253% ( 31) 00:10:58.446 12212.331 - 12264.970: 94.1062% ( 10) 00:10:58.446 12264.970 - 12317.610: 94.1386% ( 4) 00:10:58.446 12317.610 - 12370.249: 94.1872% ( 6) 00:10:58.446 12370.249 - 12422.888: 94.2519% ( 8) 00:10:58.446 12422.888 - 12475.528: 94.3086% ( 7) 00:10:58.446 12475.528 - 12528.167: 94.3491% ( 5) 00:10:58.446 12528.167 - 12580.806: 94.3977% ( 6) 00:10:58.446 12580.806 - 12633.446: 94.4301% ( 4) 00:10:58.446 12633.446 - 12686.085: 94.5029% ( 9) 00:10:58.446 12686.085 - 12738.724: 94.6405% ( 17) 00:10:58.446 12738.724 - 12791.364: 94.7134% ( 9) 00:10:58.446 12791.364 - 12844.003: 94.7377% ( 3) 00:10:58.446 12844.003 - 12896.643: 94.7539% ( 2) 00:10:58.446 12896.643 - 12949.282: 94.7701% ( 2) 00:10:58.446 12949.282 - 13001.921: 94.7863% ( 2) 00:10:58.446 13001.921 - 13054.561: 94.8106% ( 3) 00:10:58.446 13054.561 - 13107.200: 94.8187% ( 1) 00:10:58.446 13370.397 - 13423.036: 94.8267% ( 1) 00:10:58.446 13423.036 - 13475.676: 94.8429% ( 2) 00:10:58.446 13475.676 - 13580.954: 94.8915% ( 6) 00:10:58.446 13580.954 - 13686.233: 94.9401% ( 6) 00:10:58.446 13686.233 - 13791.512: 95.1020% ( 20) 00:10:58.446 13791.512 - 13896.790: 95.2073% ( 13) 00:10:58.446 13896.790 - 14002.069: 95.3935% ( 23) 00:10:58.446 14002.069 - 14107.348: 95.5797% ( 23) 00:10:58.446 14107.348 - 14212.627: 95.6768% ( 12) 00:10:58.446 14212.627 - 14317.905: 95.7659% ( 11) 00:10:58.446 14317.905 - 14423.184: 95.9440% ( 22) 00:10:58.446 14423.184 - 14528.463: 96.1707% ( 28) 00:10:58.446 14528.463 - 14633.741: 96.2759% ( 13) 00:10:58.446 14633.741 - 14739.020: 96.3488% ( 9) 00:10:58.446 14739.020 - 14844.299: 96.3731% ( 3) 00:10:58.446 14949.578 - 15054.856: 96.4621% ( 11) 00:10:58.446 15054.856 - 15160.135: 96.5431% ( 10) 00:10:58.446 15160.135 - 15265.414: 96.6240% ( 10) 00:10:58.446 15265.414 - 15370.692: 96.6969% ( 9) 00:10:58.446 15370.692 - 15475.971: 96.7536% ( 7) 00:10:58.446 15475.971 - 15581.250: 96.7778% ( 3) 00:10:58.446 15581.250 - 15686.529: 96.8102% ( 4) 00:10:58.446 15686.529 - 15791.807: 96.8345% ( 3) 00:10:58.446 15791.807 - 15897.086: 96.8750% ( 5) 00:10:58.446 15897.086 - 16002.365: 96.9317% ( 7) 00:10:58.446 16002.365 - 16107.643: 97.1503% ( 27) 00:10:58.446 16107.643 - 16212.922: 97.3122% ( 20) 00:10:58.446 16212.922 - 16318.201: 97.3769% ( 8) 00:10:58.446 16318.201 - 16423.480: 97.4255% ( 6) 00:10:58.446 16423.480 - 16528.758: 97.4903% ( 8) 00:10:58.446 16528.758 - 16634.037: 
97.5631% ( 9)
00:10:58.446 16634.037 - 16739.316: 97.6441% ( 10)
00:10:58.446 16739.316 - 16844.594: 97.7170% ( 9)
00:10:58.446 16844.594 - 16949.873: 97.7413% ( 3)
00:10:58.446 16949.873 - 17055.152: 97.7817% ( 5)
00:10:58.446 17055.152 - 17160.431: 97.8141% ( 4)
00:10:58.446 17160.431 - 17265.709: 97.8465% ( 4)
00:10:58.446 17265.709 - 17370.988: 97.8789% ( 4)
00:10:58.446 17370.988 - 17476.267: 97.9113% ( 4)
00:10:58.446 17476.267 - 17581.545: 97.9275% ( 2)
00:10:58.446 20950.464 - 21055.743: 97.9760% ( 6)
00:10:58.446 21055.743 - 21161.022: 98.0489% ( 9)
00:10:58.446 21161.022 - 21266.300: 98.0813% ( 4)
00:10:58.446 21266.300 - 21371.579: 98.1299% ( 6)
00:10:58.446 21371.579 - 21476.858: 98.1784% ( 6)
00:10:58.446 21476.858 - 21582.137: 98.2756% ( 12)
00:10:58.446 21582.137 - 21687.415: 98.4294% ( 19)
00:10:58.446 21687.415 - 21792.694: 98.6075% ( 22)
00:10:58.446 21792.694 - 21897.973: 98.7532% ( 18)
00:10:58.446 21897.973 - 22003.251: 98.8585% ( 13)
00:10:58.446 22003.251 - 22108.530: 98.9394% ( 10)
00:10:58.446 22108.530 - 22213.809: 98.9637% ( 3)
00:10:58.446 26214.400 - 26319.679: 99.1661% ( 25)
00:10:58.446 26319.679 - 26424.957: 99.2066% ( 5)
00:10:58.446 26424.957 - 26530.236: 99.2228% ( 2)
00:10:58.446 26530.236 - 26635.515: 99.2390% ( 2)
00:10:58.446 26635.515 - 26740.794: 99.2633% ( 3)
00:10:58.446 26740.794 - 26846.072: 99.2795% ( 2)
00:10:58.446 26846.072 - 26951.351: 99.2957% ( 2)
00:10:58.446 26951.351 - 27161.908: 99.3361% ( 5)
00:10:58.446 27161.908 - 27372.466: 99.3766% ( 5)
00:10:58.446 27372.466 - 27583.023: 99.4171% ( 5)
00:10:58.446 27583.023 - 27793.581: 99.4657% ( 6)
00:10:58.446 27793.581 - 28004.138: 99.4819% ( 2)
00:10:58.446 32215.287 - 32425.844: 99.5062% ( 3)
00:10:58.446 33689.189 - 33899.746: 99.5385% ( 4)
00:10:58.446 33899.746 - 34110.304: 99.5790% ( 5)
00:10:58.446 34110.304 - 34320.861: 99.6357% ( 7)
00:10:58.446 34320.861 - 34531.418: 99.6843% ( 6)
00:10:58.446 34531.418 - 34741.976: 99.7247% ( 5)
00:10:58.446 34741.976 - 34952.533: 99.7733% ( 6)
00:10:58.446 34952.533 - 35163.091: 99.8138% ( 5)
00:10:58.446 35163.091 - 35373.648: 99.8624% ( 6)
00:10:58.446 35373.648 - 35584.206: 99.9109% ( 6)
00:10:58.446 35584.206 - 35794.763: 99.9595% ( 6)
00:10:58.446 35794.763 - 36005.320: 100.0000% ( 5)
00:10:58.446
00:10:58.446 ************************************
00:10:58.446 END TEST nvme_perf
00:10:58.446 ************************************
00:10:58.446 08:23:45 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:58.446
00:10:58.446 real 0m2.728s
00:10:58.446 user 0m2.305s
00:10:58.446 sys 0m0.324s
00:10:58.446 08:23:45 nvme.nvme_perf -- common/autotest_common.sh@1133 -- # xtrace_disable
00:10:58.446 08:23:45 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
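The nvme_perf summary at the head of this output reads as IOPS, throughput, then average/min/max latency in microseconds per namespace, and the per-namespace summaries give latency percentiles. As a sanity check, throughput is just IOPS times the IO size: 144.39 MiB/s divided by 12321.47 IOPS works out to about 12288 bytes, so this run appears to have used 12 KiB IOs, and 73928.80 IOPS x 12288 B is roughly 866.35 MiB/s, matching the Total row. A rough sketch of reproducing such a run by hand with SPDK's perf example follows; the queue depth, workload, run time, and device address are illustrative values, not the ones this harness passed:

    # Hypothetical manual re-run of the perf example from the SPDK repo root;
    # the -q/-o/-w/-t values below are illustrative, not what the harness used.
    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                    # bind the NVMe controllers to a userspace driver
    sudo build/examples/perf \
        -q 32 -o 12288 -w randread -t 10 \
        -r 'trtype:PCIe traddr:0000:00:10.0' # restrict the run to a single controller

Dropping the -r argument would let the run probe and exercise all four controllers at once, which is what produces the six-namespace table above.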
00:10:58.447 08:23:45 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:58.447 08:23:45 nvme -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']'
00:10:58.447 08:23:45 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:10:58.447 08:23:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:58.447 ************************************
00:10:58.447 START TEST nvme_hello_world
00:10:58.447 ************************************
00:10:58.447 08:23:45 nvme.nvme_hello_world -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:58.706 Initializing NVMe Controllers
00:10:58.706 Attached to 0000:00:10.0
00:10:58.706 Namespace ID: 1 size: 6GB
00:10:58.706 Attached to 0000:00:11.0
00:10:58.706 Namespace ID: 1 size: 5GB
00:10:58.706 Attached to 0000:00:13.0
00:10:58.706 Namespace ID: 1 size: 1GB
00:10:58.706 Attached to 0000:00:12.0
00:10:58.706 Namespace ID: 1 size: 4GB
00:10:58.706 Namespace ID: 2 size: 4GB
00:10:58.706 Namespace ID: 3 size: 4GB
00:10:58.706 Initialization complete.
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 INFO: using host memory buffer for IO
00:10:58.706 Hello world!
00:10:58.706 ************************************
00:10:58.706 END TEST nvme_hello_world
00:10:58.706 ************************************
00:10:58.706
00:10:58.706 real 0m0.316s
00:10:58.706 user 0m0.139s
00:10:58.706 sys 0m0.136s
00:10:58.706 08:23:46 nvme.nvme_hello_world -- common/autotest_common.sh@1133 -- # xtrace_disable
00:10:58.706 08:23:46 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
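The hello_world example attaches to every probed controller, lists each active namespace, and then appears to write a buffer and read it back once per namespace, which is why six "Hello world!" lines follow the six namespaces above. A minimal standalone invocation, assuming the same workspace layout as this job, would look like:

    # Minimal standalone run of the same example (paths as in this job's workspace).
    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 scripts/setup.sh   # reserve hugepages (in MB) and rebind the devices first
    sudo build/examples/hello_world -i 0 # -i sets the shared-memory instance ID, as in the log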
00:10:58.706 08:23:46 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:58.706 08:23:46 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:10:58.706 08:23:46 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:10:58.706 08:23:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:58.706 ************************************
00:10:58.706 START TEST nvme_sgl
00:10:58.706 ************************************
00:10:58.706 08:23:46 nvme.nvme_sgl -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:58.965 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:10:58.965 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:10:58.965 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:10:59.223 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:10:59.223 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:10:59.223 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:10:59.223 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:10:59.223 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:10:59.223 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:10:59.224 NVMe Readv/Writev Request test
00:10:59.224 Attached to 0000:00:10.0
00:10:59.224 Attached to 0000:00:11.0
00:10:59.224 Attached to 0000:00:13.0
00:10:59.224 Attached to 0000:00:12.0
00:10:59.224 0000:00:10.0: build_io_request_2 test passed
00:10:59.224 0000:00:10.0: build_io_request_4 test passed
00:10:59.224 0000:00:10.0: build_io_request_5 test passed
00:10:59.224 0000:00:10.0: build_io_request_6 test passed
00:10:59.224 0000:00:10.0: build_io_request_7 test passed
00:10:59.224 0000:00:10.0: build_io_request_10 test passed
00:10:59.224 0000:00:11.0: build_io_request_2 test passed
00:10:59.224 0000:00:11.0: build_io_request_4 test passed
00:10:59.224 0000:00:11.0: build_io_request_5 test passed
00:10:59.224 0000:00:11.0: build_io_request_6 test passed
00:10:59.224 0000:00:11.0: build_io_request_7 test passed
00:10:59.224 0000:00:11.0: build_io_request_10 test passed
00:10:59.224 Cleaning up...
00:10:59.224 ************************************
00:10:59.224 END TEST nvme_sgl
00:10:59.224 ************************************
00:10:59.224
00:10:59.224 real 0m0.405s
00:10:59.224 user 0m0.185s
00:10:59.224 sys 0m0.175s
00:10:59.224 08:23:46 nvme.nvme_sgl -- common/autotest_common.sh@1133 -- # xtrace_disable
00:10:59.224 08:23:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
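The run_test wrapper seen throughout this log comes from the autotest harness: it prints the START/END banners, times the test (the real/user/sys lines), and propagates the exit code. The sgl unit deliberately submits malformed scatter-gather requests, so the "Invalid IO length parameter" lines above are expected rejections, while the "build_io_request_N test passed" lines are the valid readv/writev cases. A sketch of replaying a single unit through the same wrapper, with the paths this job used; sourcing the harness outside CI is an assumption that may need extra environment setup:

    # Replay one test unit through the harness wrapper (illustrative, from the repo root).
    cd /home/vagrant/spdk_repo/spdk
    source test/common/autotest_common.sh   # defines run_test and the xtrace helpers
    run_test "nvme_sgl" test/nvme/sgl/sgl   # probes and exercises every attached controller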
00:10:59.483 ************************************
00:10:59.483 END TEST nvme_e2edp
00:10:59.483 ************************************
00:10:59.483
00:10:59.483 real 0m0.311s
00:10:59.483 user 0m0.112s
00:10:59.483 sys 0m0.155s
00:10:59.483 08:23:47 nvme.nvme_e2edp -- common/autotest_common.sh@1133 -- # xtrace_disable
00:10:59.483 08:23:47 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:10:59.741 08:23:47 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:59.741 08:23:47 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:10:59.741 08:23:47 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:10:59.741 08:23:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:59.741 ************************************
00:10:59.741 START TEST nvme_reserve
00:10:59.741 ************************************
00:10:59.741 08:23:47 nvme.nvme_reserve -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:00.000 =====================================================
00:11:00.000 NVMe Controller at PCI bus 0, device 16, function 0
00:11:00.000 =====================================================
00:11:00.000 Reservations: Not Supported
00:11:00.000 =====================================================
00:11:00.000 NVMe Controller at PCI bus 0, device 17, function 0
00:11:00.000 =====================================================
00:11:00.000 Reservations: Not Supported
00:11:00.000 =====================================================
00:11:00.000 NVMe Controller at PCI bus 0, device 19, function 0
00:11:00.000 =====================================================
00:11:00.000 Reservations: Not Supported
00:11:00.000 =====================================================
00:11:00.000 NVMe Controller at PCI bus 0, device 18, function 0
00:11:00.000 =====================================================
00:11:00.000 Reservations: Not Supported
00:11:00.000 Reservation test passed
00:11:00.000 ************************************
00:11:00.000 END TEST nvme_reserve
00:11:00.000 ************************************
00:11:00.000
00:11:00.000 real 0m0.331s
00:11:00.000 user 0m0.108s
00:11:00.000 sys 0m0.164s
00:11:00.000 08:23:47 nvme.nvme_reserve -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:00.000 08:23:47 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:11:00.000 08:23:47 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:00.000 08:23:47 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:11:00.000 08:23:47 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:00.000 08:23:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:00.000 ************************************
00:11:00.000 START TEST nvme_err_injection
00:11:00.000 ************************************
00:11:00.000 08:23:47 nvme.nvme_err_injection -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:00.258 NVMe Error Injection test
00:11:00.258 Attached to 0000:00:10.0
00:11:00.258 Attached to 0000:00:11.0
00:11:00.258 Attached to 0000:00:13.0
00:11:00.258 Attached to 0000:00:12.0
00:11:00.258 0000:00:12.0: get features failed as expected
00:11:00.258 0000:00:10.0: get features failed as expected
00:11:00.258 0000:00:11.0: get features failed as expected
00:11:00.258 0000:00:13.0: get features failed as expected
00:11:00.258 0000:00:11.0: get features successfully as expected
00:11:00.258 0000:00:13.0: get features successfully as expected
00:11:00.258 0000:00:12.0: get features successfully as expected
00:11:00.258 0000:00:10.0: get features successfully as expected
00:11:00.258 0000:00:11.0: read failed as expected
00:11:00.258 0000:00:10.0: read failed as expected
00:11:00.258 0000:00:13.0: read failed as expected
00:11:00.258 0000:00:12.0: read failed as expected
00:11:00.258 0000:00:10.0: read successfully as expected
00:11:00.258 0000:00:11.0: read successfully as expected
00:11:00.258 0000:00:13.0: read successfully as expected
00:11:00.258 0000:00:12.0: read successfully as expected
00:11:00.258 Cleaning up...
00:11:00.517 ************************************
00:11:00.517 END TEST nvme_err_injection
00:11:00.517 ************************************
00:11:00.517
00:11:00.517 real 0m0.335s
00:11:00.517 user 0m0.111s
00:11:00.517 sys 0m0.179s
00:11:00.517 08:23:47 nvme.nvme_err_injection -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:00.517 08:23:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:11:00.517 08:23:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:00.517 08:23:47 nvme -- common/autotest_common.sh@1108 -- # '[' 9 -le 1 ']'
00:11:00.517 08:23:47 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:00.517 08:23:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:00.517 ************************************
00:11:00.517 START TEST nvme_overhead
00:11:00.517 ************************************
00:11:00.517 08:23:47 nvme.nvme_overhead -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:01.893 Initializing NVMe Controllers
00:11:01.893 Attached to 0000:00:10.0
00:11:01.893 Attached to 0000:00:11.0
00:11:01.893 Attached to 0000:00:13.0
00:11:01.893 Attached to 0000:00:12.0
00:11:01.893 Initialization complete. Launching workers.
00:11:01.893 submit (in ns) avg, min, max = 14084.9, 11609.6, 99803.2
00:11:01.893 complete (in ns) avg, min, max = 8852.6, 7800.0, 106208.8
00:11:01.893
00:11:01.893 Submit histogram
00:11:01.893 ================
00:11:01.893 Range in us Cumulative Count
00:11:01.893 11.566 - 11.618: 0.0167% ( 1)
00:11:01.893 12.440 - 12.492: 0.0334% ( 1)
00:11:01.893 12.543 - 12.594: 0.0501% ( 1)
00:11:01.893 12.594 - 12.646: 0.2169% ( 10)
00:11:01.893 12.646 - 12.697: 0.3504% ( 8)
00:11:01.893 12.697 - 12.749: 0.7008% ( 21)
00:11:01.893 12.749 - 12.800: 1.1680% ( 28)
00:11:01.893 12.800 - 12.851: 1.8021% ( 38)
00:11:01.893 12.851 - 12.903: 2.3527% ( 33)
00:11:01.893 12.903 - 12.954: 3.1036% ( 45)
00:11:01.893 12.954 - 13.006: 3.8545% ( 45)
00:11:01.893 13.006 - 13.057: 4.5553% ( 42)
00:11:01.893 13.057 - 13.108: 5.7233% ( 70)
00:11:01.893 13.108 - 13.160: 6.9748% ( 75)
00:11:01.893 13.160 - 13.263: 10.2787% ( 198)
00:11:01.893 13.263 - 13.365: 15.4514% ( 310)
00:11:01.893 13.365 - 13.468: 23.8445% ( 503)
00:11:01.893 13.468 - 13.571: 34.5236% ( 640)
00:11:01.893 13.571 - 13.674: 45.9369% ( 684)
00:11:01.893 13.674 - 13.777: 57.0165% ( 664)
00:11:01.893 13.777 - 13.880: 66.9781% ( 597)
00:11:01.893 13.880 - 13.982: 75.6049% ( 517)
00:11:01.893 13.982 - 14.085: 81.9957% ( 383)
00:11:01.893 14.085 - 14.188: 86.1672% ( 250)
00:11:01.893 14.188 - 14.291: 89.1206% ( 177)
00:11:01.893 14.291 - 14.394: 90.9061% ( 107)
00:11:01.893 14.394 - 14.496: 92.1742% ( 76)
00:11:01.893 14.496 - 14.599: 92.8416% ( 40)
00:11:01.893 14.599 - 14.702: 93.2588% ( 25)
00:11:01.893 14.702 - 14.805: 93.3923% ( 8)
00:11:01.893 14.805 - 14.908: 93.4924% ( 6)
00:11:01.893 14.908 - 15.010: 93.6259% ( 8)
00:11:01.893 15.010 - 15.113: 93.6926% ( 4)
00:11:01.893 15.113 - 15.216: 93.7761% ( 5)
00:11:01.893 15.216 - 15.319: 93.8261% ( 3)
00:11:01.893 15.319 - 15.422: 93.8762% ( 3)
00:11:01.893 15.524 - 15.627: 93.9262% ( 3)
00:11:01.893 15.627 - 15.730: 94.0264% ( 6)
00:11:01.893 15.730 - 15.833: 94.1098% ( 5)
00:11:01.893 15.833 - 15.936: 94.1765% ( 4)
00:11:01.893 15.936 - 16.039: 94.1932% ( 1)
00:11:01.893 16.039 - 16.141: 94.2099% ( 1)
00:11:01.893 16.141 - 16.244: 94.2767% ( 4)
00:11:01.893 16.244 - 16.347: 94.3100% ( 2)
00:11:01.893 16.347 - 16.450: 94.3434% ( 2)
00:11:01.893 16.450 - 16.553: 94.3768% ( 2)
00:11:01.893 16.553 - 16.655: 94.4101% ( 2)
00:11:01.893 16.758 - 16.861: 94.4602% ( 3)
00:11:01.893 16.861 - 16.964: 94.4936% ( 2)
00:11:01.893 16.964 - 17.067: 94.5103% ( 1)
00:11:01.893 17.067 - 17.169: 94.5770% ( 4)
00:11:01.893 17.169 - 17.272: 94.6438% ( 4)
00:11:01.893 17.272 - 17.375: 94.6771% ( 2)
00:11:01.893 17.375 - 17.478: 94.7272% ( 3)
00:11:01.893 17.478 - 17.581: 94.8440% ( 7)
00:11:01.893 17.581 - 17.684: 94.8940% ( 3)
00:11:01.893 17.684 - 17.786: 94.9608% ( 4)
00:11:01.893 17.786 - 17.889: 95.0275% ( 4)
00:11:01.893 17.889 - 17.992: 95.1777% ( 9)
00:11:01.893 17.992 - 18.095: 95.2945% ( 7)
00:11:01.893 18.095 - 18.198: 95.5615% ( 16)
00:11:01.893 18.198 - 18.300: 95.7951% ( 14)
00:11:01.893 18.300 - 18.403: 96.0120% ( 13)
00:11:01.893 18.403 - 18.506: 96.1956% ( 11)
00:11:01.893 18.506 - 18.609: 96.3791% ( 11)
00:11:01.893 18.609 - 18.712: 96.5960% ( 13)
00:11:01.893 18.712 - 18.814: 96.6795% ( 5)
00:11:01.893 18.814 - 18.917: 96.8129% ( 8)
00:11:01.893 18.917 - 19.020: 96.9798% ( 10)
00:11:01.893 19.020 - 19.123: 97.0799% ( 6)
00:11:01.893 19.123 - 19.226: 97.2468% ( 10)
00:11:01.893 19.226 - 19.329: 97.3803% ( 8)
00:11:01.893 19.329 - 19.431: 97.5305% ( 9)
00:11:01.893 19.431 - 19.534: 97.6806% ( 9)
00:11:01.894 19.534 - 19.637: 97.8975% ( 13)
00:11:01.894 19.637 - 19.740: 98.0144% ( 7)
00:11:01.894 19.740 - 19.843: 98.2146% ( 12)
00:11:01.894 19.843 - 19.945: 98.3147% ( 6)
00:11:01.894 19.945 - 20.048: 98.4148% ( 6)
00:11:01.894 20.048 - 20.151: 98.4982% ( 5)
00:11:01.894 20.151 - 20.254: 98.6484% ( 9)
00:11:01.894 20.254 - 20.357: 98.7652% ( 7)
00:11:01.894 20.357 - 20.459: 98.8320% ( 4)
00:11:01.894 20.459 - 20.562: 98.8820% ( 3)
00:11:01.894 20.562 - 20.665: 98.9154% ( 2)
00:11:01.894 20.665 - 20.768: 98.9488% ( 2)
00:11:01.894 20.768 - 20.871: 98.9988% ( 3)
00:11:01.894 21.179 - 21.282: 99.0322% ( 2)
00:11:01.894 21.282 - 21.385: 99.0989% ( 4)
00:11:01.894 21.385 - 21.488: 99.1490% ( 3)
00:11:01.894 21.488 - 21.590: 99.1991% ( 3)
00:11:01.894 21.590 - 21.693: 99.2324% ( 2)
00:11:01.894 21.693 - 21.796: 99.2658% ( 2)
00:11:01.894 21.796 - 21.899: 99.3326% ( 4)
00:11:01.894 21.899 - 22.002: 99.3659% ( 2)
00:11:01.894 22.002 - 22.104: 99.3826% ( 1)
00:11:01.894 22.104 - 22.207: 99.3993% ( 1)
00:11:01.894 22.310 - 22.413: 99.4327% ( 2)
00:11:01.894 22.413 - 22.516: 99.4660% ( 2)
00:11:01.894 22.516 - 22.618: 99.5161% ( 3)
00:11:01.894 22.618 - 22.721: 99.5328% ( 1)
00:11:01.894 22.824 - 22.927: 99.5662% ( 2)
00:11:01.894 22.927 - 23.030: 99.5828% ( 1)
00:11:01.894 23.235 - 23.338: 99.6162% ( 2)
00:11:01.894 23.544 - 23.647: 99.6329% ( 1)
00:11:01.894 23.647 - 23.749: 99.6496% ( 1)
00:11:01.894 24.058 - 24.161: 99.6663% ( 1)
00:11:01.894 24.366 - 24.469: 99.6996% ( 2)
00:11:01.894 24.983 - 25.086: 99.7163% ( 1)
00:11:01.894 25.497 - 25.600: 99.7330% ( 1)
00:11:01.894 25.703 - 25.806: 99.7497% ( 1)
00:11:01.894 26.217 - 26.320: 99.7664% ( 1)
00:11:01.894 27.553 - 27.759: 99.7998% ( 2)
00:11:01.894 28.993 - 29.198: 99.8165% ( 1)
00:11:01.894 29.404 - 29.610: 99.8331% ( 1)
00:11:01.894 30.432 - 30.638: 99.8498% ( 1)
00:11:01.894 31.255 - 31.460: 99.8665% ( 1)
00:11:01.894 33.105 - 33.311: 99.8832% ( 1)
00:11:01.894 43.386 - 43.592: 99.8999% ( 1)
00:11:01.894 48.321 - 48.527: 99.9166% ( 1)
00:11:01.894 48.938 - 49.144: 99.9333% ( 1)
00:11:01.894 51.406 - 51.611: 99.9499% ( 1)
00:11:01.894 68.267 - 68.678: 99.9666% ( 1)
00:11:01.894 92.530 - 92.941: 99.9833% ( 1)
00:11:01.894 99.521 - 99.933: 100.0000% ( 1)
00:11:01.894
00:11:01.894 Complete histogram
00:11:01.894 ==================
00:11:01.894 Range in us Cumulative Count
00:11:01.894 7.762 - 7.814: 0.0334% ( 2)
00:11:01.894 7.814 - 7.865: 1.1013% ( 64)
00:11:01.894 7.865 - 7.916: 6.1739% ( 304)
00:11:01.894 7.916 - 7.968: 13.9162% ( 464)
00:11:01.894 7.968 - 8.019: 22.2426% ( 499)
00:11:01.894 8.019 - 8.071: 30.9695% ( 523)
00:11:01.894 8.071 - 8.122: 37.8275% ( 411)
00:11:01.894 8.122 - 8.173: 41.8822% ( 243)
00:11:01.894 8.173 - 8.225: 43.8178% ( 116)
00:11:01.894 8.225 - 8.276: 44.9024% ( 65)
00:11:01.894 8.276 - 8.328: 45.7534% ( 51)
00:11:01.894 8.328 - 8.379: 46.4375% ( 41)
00:11:01.894 8.379 - 8.431: 47.0048% ( 34)
00:11:01.894 8.431 - 8.482: 47.6556% ( 39)
00:11:01.894 8.482 - 8.533: 49.2074% ( 93)
00:11:01.894 8.533 - 8.585: 51.7771% ( 154)
00:11:01.894 8.585 - 8.636: 55.7484% ( 238)
00:11:01.894 8.636 - 8.688: 60.6875% ( 296)
00:11:01.894 8.688 - 8.739: 65.6266% ( 296)
00:11:01.894 8.739 - 8.790: 70.9328% ( 318)
00:11:01.894 8.790 - 8.842: 75.3045% ( 262)
00:11:01.894 8.842 - 8.893: 79.1423% ( 230)
00:11:01.894 8.893 - 8.945: 82.2126% ( 184)
00:11:01.894 8.945 - 8.996: 84.5320% ( 139)
00:11:01.894 8.996 - 9.047: 86.5009% ( 118)
00:11:01.894 9.047 - 9.099: 88.2029% ( 102)
00:11:01.894 9.099 - 9.150: 89.6546% ( 87)
00:11:01.894 9.150 - 9.202: 90.9728% ( 79)
00:11:01.894 9.202 - 9.253: 91.8071% ( 50)
00:11:01.894 9.253 - 9.304: 92.6080% ( 48)
00:11:01.894 9.304 - 9.356: 93.0586% ( 27)
00:11:01.894 9.356 - 9.407: 93.4423% ( 23)
00:11:01.894 9.407 - 9.459: 93.7594% ( 19)
00:11:01.894 9.459 - 9.510: 94.0431% ( 17)
00:11:01.894 9.510 - 9.561: 94.3267% ( 17)
00:11:01.894 9.561 - 9.613: 94.6271% ( 18)
00:11:01.894 9.613 - 9.664: 94.7606% ( 8)
00:11:01.894 9.664 - 9.716: 94.9107% ( 9)
00:11:01.894 9.716 - 9.767: 94.9942% ( 5)
00:11:01.894 9.767 - 9.818: 95.0776% ( 5)
00:11:01.894 9.818 - 9.870: 95.1276% ( 3)
00:11:01.894 9.870 - 9.921: 95.1777% ( 3)
00:11:01.894 9.921 - 9.973: 95.2445% ( 4)
00:11:01.894 9.973 - 10.024: 95.2778% ( 2)
00:11:01.894 10.178 - 10.230: 95.2945% ( 1)
00:11:01.894 10.230 - 10.281: 95.3279% ( 2)
00:11:01.894 10.333 - 10.384: 95.3946% ( 4)
00:11:01.894 10.384 - 10.435: 95.4280% ( 2)
00:11:01.894 10.435 - 10.487: 95.4781% ( 3)
00:11:01.894 10.487 - 10.538: 95.5782% ( 6)
00:11:01.894 10.538 - 10.590: 95.6449% ( 4)
00:11:01.894 10.590 - 10.641: 95.6783% ( 2)
00:11:01.894 10.641 - 10.692: 95.6950% ( 1)
00:11:01.894 10.692 - 10.744: 95.7283% ( 2)
00:11:01.894 10.744 - 10.795: 95.7617% ( 2)
00:11:01.894 10.795 - 10.847: 95.7951% ( 2)
00:11:01.894 10.898 - 10.949: 95.8285% ( 2)
00:11:01.894 11.001 - 11.052: 95.8452% ( 1)
00:11:01.894 11.052 - 11.104: 95.8618% ( 1)
00:11:01.894 11.104 - 11.155: 95.8952% ( 2)
00:11:01.894 11.155 - 11.206: 95.9119% ( 1)
00:11:01.894 11.206 - 11.258: 95.9620% ( 3)
00:11:01.894 11.361 - 11.412: 95.9786% ( 1)
00:11:01.894 11.875 - 11.926: 95.9953% ( 1)
00:11:01.894 11.926 - 11.978: 96.0120% ( 1)
00:11:01.894 12.235 - 12.286: 96.0287% ( 1)
00:11:01.894 12.543 - 12.594: 96.0454% ( 1)
00:11:01.894 13.365 - 13.468: 96.0621% ( 1)
00:11:01.894 13.571 - 13.674: 96.0954% ( 2)
00:11:01.894 13.674 - 13.777: 96.1789% ( 5)
00:11:01.894 13.777 - 13.880: 96.2122% ( 2)
00:11:01.894 13.880 - 13.982: 96.2790% ( 4)
00:11:01.894 13.982 - 14.085: 96.3124% ( 2)
00:11:01.894 14.085 - 14.188: 96.3624% ( 3)
00:11:01.894 14.188 - 14.291: 96.4292% ( 4)
00:11:01.894 14.291 - 14.394: 96.4625% ( 2)
00:11:01.894 14.394 - 14.496: 96.4792% ( 1)
00:11:01.894 14.496 - 14.599: 96.5627% ( 5)
00:11:01.894 14.599 - 14.702: 96.5793% ( 1)
00:11:01.894 14.702 - 14.805: 96.6461% ( 4)
00:11:01.894 14.805 - 14.908: 96.6961% ( 3)
00:11:01.894 14.908 - 15.010: 96.7462% ( 3)
00:11:01.894 15.010 - 15.113: 96.8129% ( 4)
00:11:01.894 15.113 - 15.216: 96.8463% ( 2)
00:11:01.894 15.216 - 15.319: 96.9965% ( 9)
00:11:01.894 15.319 - 15.422: 97.0299% ( 2)
00:11:01.894 15.422 - 15.524: 97.0632% ( 2)
00:11:01.894 15.524 - 15.627: 97.1133% ( 3)
00:11:01.894 15.627 - 15.730: 97.1300% ( 1)
00:11:01.894 15.730 - 15.833: 97.1634% ( 2)
00:11:01.894 15.936 - 16.039: 97.1967% ( 2)
00:11:01.894 16.039 - 16.141: 97.2301% ( 2)
00:11:01.894 16.141 - 16.244: 97.2635% ( 2)
00:11:01.894 16.347 - 16.450: 97.2802% ( 1)
00:11:01.894 16.553 - 16.655: 97.3469% ( 4)
00:11:01.894 16.758 - 16.861: 97.3803% ( 2)
00:11:01.894 16.861 - 16.964: 97.4804% ( 6)
00:11:01.894 16.964 - 17.067: 97.5305% ( 3)
00:11:01.894 17.067 - 17.169: 97.5972% ( 4)
00:11:01.894 17.169 - 17.272: 97.6639% ( 4)
00:11:01.894 17.272 - 17.375: 97.7474% ( 5)
00:11:01.894 17.375 - 17.478: 97.7974% ( 3)
00:11:01.894 17.478 - 17.581: 97.9142% ( 7)
00:11:01.894 17.581 - 17.684: 97.9643% ( 3)
00:11:01.894 17.684 - 17.786: 98.0811% ( 7)
00:11:01.894 17.786 - 17.889: 98.1812% ( 6)
00:11:01.894 17.889 - 17.992: 98.2646% ( 5)
00:11:01.894 17.992 - 18.095: 98.3648% ( 6)
00:11:01.894 18.095 - 18.198: 98.4816% ( 7)
00:11:01.894 18.198 - 18.300: 98.5650% ( 5)
00:11:01.894 18.300 - 18.403: 98.6651% ( 6)
00:11:01.894 18.403 - 18.506: 98.6985% ( 2)
00:11:01.894 18.506 - 18.609: 98.7819% ( 5)
00:11:01.894 18.609 - 18.712: 98.8487% ( 4)
00:11:01.894 18.712 - 18.814: 98.9154% ( 4)
00:11:01.894 18.814 - 18.917: 98.9988% ( 5)
00:11:01.894 18.917 - 19.020: 99.0823% ( 5)
00:11:01.894 19.020 - 19.123: 99.1991% ( 7)
00:11:01.894 19.123 - 19.226: 99.2324% ( 2)
00:11:01.894 19.226 - 19.329: 99.3326% ( 6)
00:11:01.895 19.329 - 19.431: 99.3993% ( 4)
00:11:01.895 19.431 - 19.534: 99.4494% ( 3)
00:11:01.895 19.534 - 19.637: 99.5161% ( 4)
00:11:01.895 19.740 - 19.843: 99.5495% ( 2)
00:11:01.895 19.843 - 19.945: 99.5828% ( 2)
00:11:01.895 19.945 - 20.048: 99.5995% ( 1)
00:11:01.895 20.048 - 20.151: 99.6162% ( 1)
00:11:01.895 20.151 - 20.254: 99.6830% ( 4)
00:11:01.895 20.254 - 20.357: 99.7330% ( 3)
00:11:01.895 20.357 - 20.459: 99.7497% ( 1)
00:11:01.895 20.973 - 21.076: 99.7664% ( 1)
00:11:01.895 21.179 - 21.282: 99.7831% ( 1)
00:11:01.895 21.796 - 21.899: 99.7998% ( 1)
00:11:01.895 22.721 - 22.824: 99.8165% ( 1)
00:11:01.895 22.927 - 23.030: 99.8331% ( 1)
00:11:01.895 23.030 - 23.133: 99.8498% ( 1)
00:11:01.895 24.058 - 24.161: 99.8665% ( 1)
00:11:01.895 25.394 - 25.497: 99.8832% ( 1)
00:11:01.895 27.965 - 28.170: 99.8999% ( 1)
00:11:01.895 29.610 - 29.815: 99.9166% ( 1)
00:11:01.895 31.255 - 31.460: 99.9333% ( 1)
00:11:01.895 34.133 - 34.339: 99.9499% ( 1)
00:11:01.895 42.358 - 42.564: 99.9666% ( 1)
00:11:01.895 57.163 - 57.574: 99.9833% ( 1)
00:11:01.895 106.101 - 106.924: 100.0000% ( 1)
00:11:01.895
00:11:01.895 ************************************
00:11:01.895 END TEST nvme_overhead
00:11:01.895 ************************************
00:11:01.895
00:11:01.895 real 0m1.310s
00:11:01.895 user 0m1.101s
00:11:01.895 sys 0m0.160s
00:11:01.895 08:23:49 nvme.nvme_overhead -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:01.895 08:23:49 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:11:01.895 08:23:49 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:11:01.895 08:23:49 nvme -- common/autotest_common.sh@1108 -- # '[' 6 -le 1 ']'
00:11:01.895 08:23:49 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:01.895 08:23:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:01.895 ************************************
00:11:01.895 START TEST nvme_arbitration
00:11:01.895 ************************************
00:11:01.895 08:23:49 nvme.nvme_arbitration -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:11:05.176 Initializing NVMe Controllers
00:11:05.176 Attached to 0000:00:10.0
00:11:05.176 Attached to 0000:00:11.0
00:11:05.176 Attached to 0000:00:13.0
00:11:05.176 Attached to 0000:00:12.0
00:11:05.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:11:05.176 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:11:05.176 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:11:05.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:11:05.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:11:05.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:11:05.176 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:11:05.176 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:11:05.176 Initialization complete. Launching workers.
00:11:05.176 Starting thread on core 1 with urgent priority queue
00:11:05.176 Starting thread on core 2 with urgent priority queue
00:11:05.176 Starting thread on core 3 with urgent priority queue
00:11:05.176 Starting thread on core 0 with urgent priority queue
00:11:05.176 QEMU NVMe Ctrl (12340 ) core 0: 341.33 IO/s 292.97 secs/100000 ios
00:11:05.176 QEMU NVMe Ctrl (12342 ) core 0: 341.33 IO/s 292.97 secs/100000 ios
00:11:05.176 QEMU NVMe Ctrl (12341 ) core 1: 362.67 IO/s 275.74 secs/100000 ios
00:11:05.176 QEMU NVMe Ctrl (12342 ) core 1: 362.67 IO/s 275.74 secs/100000 ios
00:11:05.176 QEMU NVMe Ctrl (12343 ) core 2: 618.67 IO/s 161.64 secs/100000 ios
00:11:05.176 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios
00:11:05.176 ========================================================
00:11:05.176
00:11:05.435 ************************************
00:11:05.435 END TEST nvme_arbitration
00:11:05.435 ************************************
00:11:05.435
00:11:05.435 real 0m3.458s
00:11:05.435 user 0m9.406s
00:11:05.435 sys 0m0.178s
00:11:05.435 08:23:52 nvme.nvme_arbitration -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:05.435 08:23:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:11:05.435 08:23:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:05.435 08:23:52 nvme -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']'
00:11:05.435 08:23:52 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:05.435 08:23:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:05.435 ************************************
00:11:05.435 START TEST nvme_single_aen
00:11:05.435 ************************************
00:11:05.435 08:23:52 nvme.nvme_single_aen -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:05.693 Asynchronous Event Request test
00:11:05.693 Attached to 0000:00:10.0
00:11:05.693 Attached to 0000:00:11.0
00:11:05.693 Attached to 0000:00:13.0
00:11:05.693 Attached to 0000:00:12.0
00:11:05.693 Reset controller to setup AER completions for this process
00:11:05.693 Registering asynchronous event callbacks...
00:11:05.693 Getting orig temperature thresholds of all controllers
00:11:05.693 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:05.693 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:05.693 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:05.693 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:05.693 Setting all controllers temperature threshold low to trigger AER
00:11:05.693 Waiting for all controllers temperature threshold to be set lower
00:11:05.693 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:05.693 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:05.693 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:05.693 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:05.693 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:05.693 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:05.693 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:05.693 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:05.693 Waiting for all controllers to trigger AER and reset threshold
00:11:05.693 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:05.693 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:05.693 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:05.693 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:05.693 Cleaning up...
00:11:05.693 ************************************
00:11:05.693 END TEST nvme_single_aen
00:11:05.693 ************************************
00:11:05.693
00:11:05.693 real 0m0.308s
00:11:05.693 user 0m0.113s
00:11:05.693 sys 0m0.153s
00:11:05.693 08:23:53 nvme.nvme_single_aen -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:05.693 08:23:53 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:11:05.693 08:23:53 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:11:05.693 08:23:53 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:11:05.693 08:23:53 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:05.693 08:23:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:05.693 ************************************
00:11:05.693 START TEST nvme_doorbell_aers
00:11:05.693 ************************************
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1132 -- # nvme_doorbell_aers
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1486 -- # bdfs=()
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1486 -- # local bdfs
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:05.694 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr'
00:11:05.952 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1488 -- # (( 4 == 0 ))
00:11:05.952 08:23:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:05.952 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:05.952 08:23:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:11:06.210 [2024-11-20 08:23:53.624611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:16.213 Executing: test_write_invalid_db
00:11:16.213 Waiting for AER completion...
00:11:16.213 Failure: test_write_invalid_db
00:11:16.213
00:11:16.214 Executing: test_invalid_db_write_overflow_sq
00:11:16.214 Waiting for AER completion...
00:11:16.214 Failure: test_invalid_db_write_overflow_sq
00:11:16.214
00:11:16.214 Executing: test_invalid_db_write_overflow_cq
00:11:16.214 Waiting for AER completion...
00:11:16.214 Failure: test_invalid_db_write_overflow_cq
00:11:16.214
00:11:16.214 08:24:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:16.214 08:24:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
[2024-11-20 08:24:03.690215] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:26.193 Executing: test_write_invalid_db
00:11:26.193 Waiting for AER completion...
00:11:26.193 Failure: test_write_invalid_db
00:11:26.193
00:11:26.193 Executing: test_invalid_db_write_overflow_sq
00:11:26.193 Waiting for AER completion...
00:11:26.193 Failure: test_invalid_db_write_overflow_sq
00:11:26.193
00:11:26.193 Executing: test_invalid_db_write_overflow_cq
00:11:26.193 Waiting for AER completion...
00:11:26.193 Failure: test_invalid_db_write_overflow_cq
00:11:26.193
00:11:26.193 08:24:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:26.193 08:24:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
[2024-11-20 08:24:13.727713] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:36.167 Executing: test_write_invalid_db
00:11:36.167 Waiting for AER completion...
00:11:36.167 Failure: test_write_invalid_db
00:11:36.167
00:11:36.167 Executing: test_invalid_db_write_overflow_sq
00:11:36.167 Waiting for AER completion...
00:11:36.167 Failure: test_invalid_db_write_overflow_sq
00:11:36.167
00:11:36.167 Executing: test_invalid_db_write_overflow_cq
00:11:36.167 Waiting for AER completion...
00:11:36.167 Failure: test_invalid_db_write_overflow_cq
00:11:36.167
00:11:36.167 08:24:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:36.167 08:24:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:36.425 [2024-11-20 08:24:23.766100] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 Executing: test_write_invalid_db
00:11:46.462 Waiting for AER completion...
00:11:46.462 Failure: test_write_invalid_db
00:11:46.462
00:11:46.462 Executing: test_invalid_db_write_overflow_sq
00:11:46.462 Waiting for AER completion...
00:11:46.462 Failure: test_invalid_db_write_overflow_sq
00:11:46.462
00:11:46.462 Executing: test_invalid_db_write_overflow_cq
00:11:46.462 Waiting for AER completion...
00:11:46.462 Failure: test_invalid_db_write_overflow_cq
00:11:46.462
00:11:46.462 ************************************
00:11:46.462 END TEST nvme_doorbell_aers
00:11:46.462 ************************************
00:11:46.462
00:11:46.462 real 0m40.339s
00:11:46.462 user 0m28.660s
00:11:46.462 sys 0m11.310s
00:11:46.462 08:24:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:46.462 08:24:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:11:46.462 08:24:33 nvme -- nvme/nvme.sh@97 -- # uname
00:11:46.462 08:24:33 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:11:46.462 08:24:33 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:46.462 08:24:33 nvme -- common/autotest_common.sh@1108 -- # '[' 6 -le 1 ']'
00:11:46.462 08:24:33 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:46.462 08:24:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:46.462 ************************************
00:11:46.462 START TEST nvme_multi_aen
00:11:46.462 ************************************
00:11:46.462 08:24:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:46.462 [2024-11-20 08:24:33.893147] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.893252] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.893274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.896909] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.896957] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.896972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.898485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.898528] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.898542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.899916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.899955] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 [2024-11-20 08:24:33.899968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64194) is not found. Dropping the request.
00:11:46.462 Child process pid: 64704
00:11:46.721 [Child] Asynchronous Event Request test
00:11:46.721 [Child] Attached to 0000:00:10.0
00:11:46.721 [Child] Attached to 0000:00:11.0
00:11:46.721 [Child] Attached to 0000:00:13.0
00:11:46.721 [Child] Attached to 0000:00:12.0
00:11:46.721 [Child] Registering asynchronous event callbacks...
00:11:46.721 [Child] Getting orig temperature thresholds of all controllers
00:11:46.721 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 [Child] Waiting for all controllers to trigger AER and reset threshold
00:11:46.721 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 [Child] Cleaning up...
00:11:46.721 Asynchronous Event Request test
00:11:46.721 Attached to 0000:00:10.0
00:11:46.721 Attached to 0000:00:11.0
00:11:46.721 Attached to 0000:00:13.0
00:11:46.721 Attached to 0000:00:12.0
00:11:46.721 Reset controller to setup AER completions for this process
00:11:46.721 Registering asynchronous event callbacks...
00:11:46.721 Getting orig temperature thresholds of all controllers
00:11:46.721 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:46.721 Setting all controllers temperature threshold low to trigger AER
00:11:46.721 Waiting for all controllers temperature threshold to be set lower
00:11:46.721 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:46.721 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:46.721 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:46.721 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:46.721 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:46.721 Waiting for all controllers to trigger AER and reset threshold
00:11:46.721 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:46.721 Cleaning up...
00:11:46.721
00:11:46.721 real 0m0.652s
00:11:46.721 user 0m0.231s
00:11:46.721 sys 0m0.313s
00:11:46.722 08:24:34 nvme.nvme_multi_aen -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:46.722 08:24:34 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:11:46.722 ************************************
00:11:46.722 END TEST nvme_multi_aen
00:11:46.722 ************************************
00:11:46.980 08:24:34 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:46.980 08:24:34 nvme -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']'
00:11:46.980 08:24:34 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:46.980 08:24:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:46.980 ************************************
00:11:46.980 START TEST nvme_startup
00:11:46.980 ************************************
00:11:46.980 08:24:34 nvme.nvme_startup -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:47.238 Initializing NVMe Controllers
00:11:47.238 Attached to 0000:00:10.0
00:11:47.238 Attached to 0000:00:11.0
00:11:47.238 Attached to 0000:00:13.0
00:11:47.238 Attached to 0000:00:12.0
00:11:47.238 Initialization complete.
00:11:47.238 Time used:196316.422 (us).
00:11:47.238
00:11:47.238 real 0m0.298s
00:11:47.238 user 0m0.109s
00:11:47.238 sys 0m0.147s
00:11:47.238 08:24:34 nvme.nvme_startup -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:47.238 ************************************
00:11:47.238 END TEST nvme_startup
00:11:47.238 ************************************
00:11:47.238 08:24:34 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:11:47.238 08:24:34 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:11:47.238 08:24:34 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:11:47.238 08:24:34 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:47.238 08:24:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:47.238 ************************************
00:11:47.238 START TEST nvme_multi_secondary
00:11:47.238 ************************************
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1132 -- # nvme_multi_secondary
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64760
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64761
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:11:47.238 08:24:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:51.426 Initializing NVMe Controllers
00:11:51.426 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:51.426 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:51.426 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:51.426 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:51.426 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:51.426 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:51.426 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:51.426 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:51.426 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:51.426 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:51.426 Initialization complete. Launching workers.
00:11:51.427 ========================================================
00:11:51.427 Latency(us)
00:11:51.427 Device Information : IOPS MiB/s Average min max
00:11:51.427 PCIE (0000:00:10.0) NSID 1 from core 1: 4964.18 19.39 3220.78 1579.28 10472.05
00:11:51.427 PCIE (0000:00:11.0) NSID 1 from core 1: 4964.18 19.39 3222.67 1737.55 10655.85
00:11:51.427 PCIE (0000:00:13.0) NSID 1 from core 1: 4964.18 19.39 3222.86 1534.09 10658.79
00:11:51.427 PCIE (0000:00:12.0) NSID 1 from core 1: 4964.18 19.39 3223.00 1628.82 10719.97
00:11:51.427 PCIE (0000:00:12.0) NSID 2 from core 1: 4964.18 19.39 3223.13 1643.71 9567.65
00:11:51.427 PCIE (0000:00:12.0) NSID 3 from core 1: 4964.18 19.39 3223.30 1644.20 9916.91
00:11:51.427 ========================================================
00:11:51.427 Total : 29785.10 116.35 3222.62 1534.09 10719.97
00:11:51.427
00:11:51.427 Initializing NVMe Controllers
00:11:51.427 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:51.427 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:51.427 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:51.427 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:51.427 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:51.427 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:51.427 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:51.427 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:51.427 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:51.427 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:51.427 Initialization complete. Launching workers.
00:11:51.427 ========================================================
00:11:51.427 Latency(us)
00:11:51.427 Device Information : IOPS MiB/s Average min max
00:11:51.427 PCIE (0000:00:10.0) NSID 1 from core 2: 3359.74 13.12 4760.76 1273.24 10855.10
00:11:51.427 PCIE (0000:00:11.0) NSID 1 from core 2: 3359.74 13.12 4761.85 1095.16 10637.32
00:11:51.427 PCIE (0000:00:13.0) NSID 1 from core 2: 3359.74 13.12 4761.98 1172.76 11766.55
00:11:51.427 PCIE (0000:00:12.0) NSID 1 from core 2: 3359.74 13.12 4761.87 1246.43 11449.57
00:11:51.427 PCIE (0000:00:12.0) NSID 2 from core 2: 3359.74 13.12 4761.87 1188.46 10545.52
00:11:51.427 PCIE (0000:00:12.0) NSID 3 from core 2: 3359.74 13.12 4761.96 1192.07 11225.15
00:11:51.427 ========================================================
00:11:51.427 Total : 20158.41 78.74 4761.71 1095.16 11766.55
00:11:51.427
00:11:51.427 08:24:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64760
00:11:52.801 Initializing NVMe Controllers
00:11:52.801 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:52.801 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:52.801 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:52.801 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:52.801 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:52.801 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:52.801 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:52.801 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:52.801 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:52.801 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:52.801 Initialization complete. Launching workers.
00:11:52.801 ========================================================
00:11:52.801 Latency(us)
00:11:52.801 Device Information : IOPS MiB/s Average min max
00:11:52.801 PCIE (0000:00:10.0) NSID 1 from core 0: 8238.36 32.18 1940.63 927.36 6784.80
00:11:52.801 PCIE (0000:00:11.0) NSID 1 from core 0: 8238.36 32.18 1941.68 951.55 7060.60
00:11:52.801 PCIE (0000:00:13.0) NSID 1 from core 0: 8238.36 32.18 1941.64 880.07 7645.18
00:11:52.801 PCIE (0000:00:12.0) NSID 1 from core 0: 8238.36 32.18 1941.62 796.79 7779.52
00:11:52.801 PCIE (0000:00:12.0) NSID 2 from core 0: 8238.36 32.18 1941.60 803.02 7955.55
00:11:52.801 PCIE (0000:00:12.0) NSID 3 from core 0: 8241.56 32.19 1940.82 747.17 5592.13
00:11:52.801 ========================================================
00:11:52.801 Total : 49433.35 193.10 1941.33 747.17 7955.55
00:11:52.801
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64761
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64836
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64837
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:52.801 08:24:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:11:56.084 Initializing NVMe Controllers
00:11:56.084 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:56.084 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:56.084 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:56.084 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:56.084 Initialization complete. Launching workers.
00:11:56.084 ========================================================
00:11:56.084 Latency(us)
00:11:56.084 Device Information : IOPS MiB/s Average min max
00:11:56.084 PCIE (0000:00:10.0) NSID 1 from core 0: 5387.91 21.05 2967.45 949.43 5726.40
00:11:56.084 PCIE (0000:00:11.0) NSID 1 from core 0: 5387.91 21.05 2969.21 992.48 6083.73
00:11:56.084 PCIE (0000:00:13.0) NSID 1 from core 0: 5387.91 21.05 2969.52 985.89 6625.80
00:11:56.084 PCIE (0000:00:12.0) NSID 1 from core 0: 5387.91 21.05 2969.78 979.34 6350.19
00:11:56.084 PCIE (0000:00:12.0) NSID 2 from core 0: 5387.91 21.05 2969.92 983.35 6744.80
00:11:56.084 PCIE (0000:00:12.0) NSID 3 from core 0: 5393.24 21.07 2967.13 986.72 6149.60
00:11:56.084 ========================================================
00:11:56.084 Total : 32332.80 126.30 2968.84 949.43 6744.80
00:11:56.084
00:11:56.084 Initializing NVMe Controllers
00:11:56.084 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:56.084 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:56.084 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:56.084 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:56.084 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:56.084 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:56.084 Initialization complete. Launching workers.
00:11:56.084 ========================================================
00:11:56.084 Latency(us)
00:11:56.084 Device Information : IOPS MiB/s Average min max
00:11:56.084 PCIE (0000:00:10.0) NSID 1 from core 1: 5073.69 19.82 3151.09 1054.23 8555.57
00:11:56.084 PCIE (0000:00:11.0) NSID 1 from core 1: 5073.69 19.82 3152.72 1068.84 10240.52
00:11:56.084 PCIE (0000:00:13.0) NSID 1 from core 1: 5073.69 19.82 3152.66 1049.91 9743.85
00:11:56.084 PCIE (0000:00:12.0) NSID 1 from core 1: 5073.69 19.82 3152.60 1059.91 10038.21
00:11:56.084 PCIE (0000:00:12.0) NSID 2 from core 1: 5073.69 19.82 3152.56 1059.29 10092.74
00:11:56.084 PCIE (0000:00:12.0) NSID 3 from core 1: 5073.69 19.82 3152.52 934.28 8743.55
00:11:56.084 ========================================================
00:11:56.084 Total : 30442.14 118.91 3152.36 934.28 10240.52
00:11:56.084
00:11:57.985 Initializing NVMe Controllers
00:11:57.985 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:57.985 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:57.985 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:57.985 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:57.985 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:57.985 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:57.985 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:57.985 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:57.985 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:57.985 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:57.985 Initialization complete. Launching workers.
00:11:57.985 ========================================================
00:11:57.985 Latency(us)
00:11:57.985 Device Information : IOPS MiB/s Average min max
00:11:57.985 PCIE (0000:00:10.0) NSID 1 from core 2: 3287.13 12.84 4865.90 1038.32 14843.89
00:11:57.985 PCIE (0000:00:11.0) NSID 1 from core 2: 3287.13 12.84 4867.31 1041.01 12462.12
00:11:57.985 PCIE (0000:00:13.0) NSID 1 from core 2: 3287.13 12.84 4866.98 1048.60 13625.99
00:11:57.985 PCIE (0000:00:12.0) NSID 1 from core 2: 3287.13 12.84 4866.84 1036.98 13039.83
00:11:57.985 PCIE (0000:00:12.0) NSID 2 from core 2: 3287.13 12.84 4867.05 1065.65 13155.41
00:11:57.985 PCIE (0000:00:12.0) NSID 3 from core 2: 3287.13 12.84 4866.73 1057.02 13918.75
00:11:57.985 ========================================================
00:11:57.985 Total : 19722.76 77.04 4866.80 1036.98 14843.89
00:11:57.985
00:11:57.985 ************************************
00:11:57.985 END TEST nvme_multi_secondary
00:11:57.985 ************************************
00:11:57.985 08:24:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64836
00:11:57.985 08:24:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64837
00:11:57.985
00:11:57.985 real 0m10.821s
00:11:57.985 user 0m18.536s
00:11:57.985 sys 0m1.084s
00:11:57.985 08:24:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1133 -- # xtrace_disable
00:11:57.985 08:24:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:11:58.244 08:24:45 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:11:58.244 08:24:45 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1096 -- # [[ -e /proc/63769 ]]
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1097 -- # kill 63769
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1098 -- # wait 63769
00:11:58.244 [2024-11-20 08:24:45.582875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.583357] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.583449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.583504] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.590006] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.590082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.590112] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.590144] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.594928] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.595024] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.595054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.595086] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.598213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.598263] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.598283] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 [2024-11-20 08:24:45.598305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64703) is not found. Dropping the request.
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1100 -- # rm -f /var/run/spdk_stub0
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1104 -- # echo 2
00:11:58.244 08:24:45 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:11:58.244 08:24:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:58.244 ************************************
00:11:58.244 START TEST bdev_nvme_reset_stuck_adm_cmd
00:11:58.244 ************************************
00:11:58.244 08:24:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:58.503 * Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:58.503 08:24:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1637 -- # [[ y == y ]]
00:11:58.503 08:24:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1638 -- # lcov --version
00:11:58.503 08:24:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1638 -- # awk '{print $NF}'
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1638 -- # lt 1.15 2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:58.503 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS=
00:11:58.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.504 --rc genhtml_branch_coverage=1
00:11:58.504 --rc genhtml_function_coverage=1
00:11:58.504 --rc genhtml_legend=1
00:11:58.504 --rc geninfo_all_blocks=1
00:11:58.504 --rc geninfo_unexecuted_blocks=1
00:11:58.504
00:11:58.504 '
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1651 -- # LCOV_OPTS='
00:11:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.504 --rc genhtml_branch_coverage=1
00:11:58.504 --rc genhtml_function_coverage=1
00:11:58.504 --rc genhtml_legend=1
00:11:58.504 --rc geninfo_all_blocks=1
00:11:58.504 --rc geninfo_unexecuted_blocks=1
00:11:58.504
00:11:58.504 '
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov
00:11:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.504 --rc genhtml_branch_coverage=1
00:11:58.504 --rc genhtml_function_coverage=1
00:11:58.504 --rc genhtml_legend=1
00:11:58.504 --rc geninfo_all_blocks=1
00:11:58.504 --rc geninfo_unexecuted_blocks=1
00:11:58.504
00:11:58.504 '
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1652 -- # LCOV='lcov
00:11:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.504 --rc genhtml_branch_coverage=1
00:11:58.504 --rc genhtml_function_coverage=1
00:11:58.504 --rc genhtml_legend=1
00:11:58.504 --rc geninfo_all_blocks=1
00:11:58.504 --rc geninfo_unexecuted_blocks=1
00:11:58.504
00:11:58.504 '
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=() 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # local bdfs 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=($(get_nvme_bdfs)) 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # get_nvme_bdfs 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1486 -- # bdfs=() 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1486 -- # local bdfs 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:11:58.504 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1488 -- # (( 4 == 0 )) 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # echo 0000:00:10.0 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65009 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65009 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # '[' -z 65009 ']' 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@843 -- # local max_retries=100 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
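
The get_first_nvme_bdf helper traced above reduces to a short routine; a condensed sketch, assuming $rootdir points at the SPDK checkout and folding get_nvme_bdfs into its caller:

    get_first_nvme_bdf() {
        local bdfs=()
        # gen_nvme.sh emits a JSON bdev config; jq pulls each controller's PCI address
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1  # the "(( 4 == 0 ))" guard seen in the trace
        echo "${bdfs[0]}"                 # first of the four controllers: 0000:00:10.0
    }

The test hands that BDF to bdev_nvme_attach_controller below, so the error injection always targets the first enumerated controller (0000:00:10.0 on this machine).
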
00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@847 -- # xtrace_disable 00:11:58.763 08:24:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:58.763 [2024-11-20 08:24:46.293190] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:11:58.763 [2024-11-20 08:24:46.293576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65009 ] 00:11:59.023 [2024-11-20 08:24:46.509541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.281 [2024-11-20 08:24:46.633932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.281 [2024-11-20 08:24:46.633956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.281 [2024-11-20 08:24:46.634127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.281 [2024-11-20 08:24:46.634160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@871 -- # return 0 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:00.218 nvme0n1 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_F1w6V.txt 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:00.218 true 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732091087 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65032 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:00.218 08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:00.218 
08:24:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:02.120 [2024-11-20 08:24:49.657328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:02.120 [2024-11-20 08:24:49.657640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:02.120 [2024-11-20 08:24:49.657668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:02.120 [2024-11-20 08:24:49.657685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.120 [2024-11-20 08:24:49.659716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:02.120 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65032 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65032 00:12:02.120 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65032 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_F1w6V.txt 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_F1w6V.txt 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65009 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' -z 65009 ']' 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@961 -- # kill -0 65009 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # uname 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 65009 00:12:02.379 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:12:02.380 killing process with pid 65009 00:12:02.380 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:12:02.380 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@975 -- # echo 'killing process with pid 65009' 00:12:02.380 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # kill 65009 00:12:02.380 08:24:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@981 -- # wait 65009 00:12:04.959 08:24:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:04.959 08:24:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:04.959 00:12:04.959 real 0m6.473s 00:12:04.959 user 0m22.443s 00:12:04.959 sys 0m0.837s 00:12:04.959 08:24:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1133 -- # 
xtrace_disable 00:12:04.959 ************************************ 00:12:04.959 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:04.959 ************************************ 00:12:04.959 08:24:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:04.959 08:24:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:04.959 08:24:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:04.959 08:24:52 nvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:12:04.959 08:24:52 nvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:04.959 08:24:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.959 ************************************ 00:12:04.959 START TEST nvme_fio 00:12:04.959 ************************************ 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1132 -- # nvme_fio_test 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1486 -- # bdfs=() 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1486 -- # local bdfs 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1488 -- # (( 4 == 0 )) 00:12:04.959 08:24:52 nvme.nvme_fio -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:04.959 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:05.217 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:05.217 08:24:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:05.475 08:24:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:05.475 08:24:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # local sanitizers 00:12:05.475 08:24:53 nvme.nvme_fio -- 
common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # shift 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local asan_lib= 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # grep libasan 00:12:05.475 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:12:05.733 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:05.733 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:05.733 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # break 00:12:05.733 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:05.733 08:24:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:05.733 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:05.733 fio-3.35 00:12:05.733 Starting 1 thread 00:12:09.921 00:12:09.921 test: (groupid=0, jobs=1): err= 0: pid=65188: Wed Nov 20 08:24:56 2024 00:12:09.921 read: IOPS=22.3k, BW=87.0MiB/s (91.3MB/s)(174MiB/2001msec) 00:12:09.921 slat (nsec): min=3739, max=51359, avg=4366.00, stdev=1376.99 00:12:09.921 clat (usec): min=222, max=10435, avg=2862.09, stdev=641.06 00:12:09.921 lat (usec): min=226, max=10487, avg=2866.46, stdev=641.95 00:12:09.921 clat percentiles (usec): 00:12:09.921 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:12:09.921 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2769], 00:12:09.921 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3195], 00:12:09.921 | 99.00th=[ 6390], 99.50th=[ 8356], 99.90th=[ 9241], 99.95th=[ 9241], 00:12:09.921 | 99.99th=[10159] 00:12:09.921 bw ( KiB/s): min=82728, max=93240, per=98.10%, avg=87432.00, stdev=5342.25, samples=3 00:12:09.921 iops : min=20682, max=23310, avg=21858.00, stdev=1335.56, samples=3 00:12:09.921 write: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec); 0 zone resets 00:12:09.921 slat (usec): min=3, max=147, avg= 4.81, stdev= 1.57 00:12:09.921 clat (usec): min=214, max=10261, avg=2876.73, stdev=672.00 00:12:09.921 lat (usec): min=218, max=10283, avg=2881.54, stdev=672.89 00:12:09.921 clat percentiles (usec): 00:12:09.921 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:12:09.921 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:12:09.921 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3228], 00:12:09.921 | 99.00th=[ 6980], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[ 9241], 00:12:09.921 | 99.99th=[ 9896] 00:12:09.921 bw ( KiB/s): min=82784, max=93872, per=98.97%, avg=87605.33, stdev=5683.54, samples=3 00:12:09.921 iops : min=20696, max=23468, avg=21901.33, stdev=1420.89, samples=3 00:12:09.921 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:09.921 lat (msec) : 2=0.16%, 4=96.93%, 10=2.86%, 20=0.01% 00:12:09.921 cpu : usr=99.35%, sys=0.10%, ctx=2, 
majf=0, minf=607 00:12:09.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:09.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.921 issued rwts: total=44586,44281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.921 00:12:09.921 Run status group 0 (all jobs): 00:12:09.921 READ: bw=87.0MiB/s (91.3MB/s), 87.0MiB/s-87.0MiB/s (91.3MB/s-91.3MB/s), io=174MiB (183MB), run=2001-2001msec 00:12:09.921 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:12:09.922 ----------------------------------------------------- 00:12:09.922 Suppressions used: 00:12:09.922 count bytes template 00:12:09.922 1 32 /usr/src/fio/parse.c 00:12:09.922 1 8 libtcmalloc_minimal.so 00:12:09.922 ----------------------------------------------------- 00:12:09.922 00:12:09.922 08:24:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:09.922 08:24:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:09.922 08:24:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:09.922 08:24:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:09.922 08:24:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:09.922 08:24:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:10.196 08:24:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:10.196 08:24:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # local sanitizers 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # shift 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local asan_lib= 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # grep libasan 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # break 00:12:10.196 08:24:57 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:10.196 08:24:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:10.196 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:10.196 fio-3.35 00:12:10.196 Starting 1 thread 00:12:14.420 00:12:14.420 test: (groupid=0, jobs=1): err= 0: pid=65254: Wed Nov 20 08:25:01 2024 00:12:14.420 read: IOPS=20.3k, BW=79.3MiB/s (83.2MB/s)(159MiB/2001msec) 00:12:14.420 slat (nsec): min=3859, max=43636, avg=5056.09, stdev=1192.48 00:12:14.420 clat (usec): min=258, max=11186, avg=3136.29, stdev=314.20 00:12:14.420 lat (usec): min=263, max=11230, avg=3141.35, stdev=314.46 00:12:14.420 clat percentiles (usec): 00:12:14.420 | 1.00th=[ 2769], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 2999], 00:12:14.420 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3130], 00:12:14.420 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3294], 95.00th=[ 3392], 00:12:14.420 | 99.00th=[ 3949], 99.50th=[ 4490], 99.90th=[ 7832], 99.95th=[ 8848], 00:12:14.420 | 99.99th=[10814] 00:12:14.420 bw ( KiB/s): min=78544, max=82120, per=98.80%, avg=80253.33, stdev=1793.18, samples=3 00:12:14.420 iops : min=19636, max=20530, avg=20063.33, stdev=448.30, samples=3 00:12:14.420 write: IOPS=20.3k, BW=79.1MiB/s (83.0MB/s)(158MiB/2001msec); 0 zone resets 00:12:14.420 slat (nsec): min=3965, max=37834, avg=5355.51, stdev=1297.84 00:12:14.420 clat (usec): min=207, max=10922, avg=3146.75, stdev=321.66 00:12:14.420 lat (usec): min=212, max=10940, avg=3152.10, stdev=321.90 00:12:14.420 clat percentiles (usec): 00:12:14.420 | 1.00th=[ 2802], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:12:14.420 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:12:14.420 | 70.00th=[ 3195], 80.00th=[ 3228], 90.00th=[ 3294], 95.00th=[ 3392], 00:12:14.420 | 99.00th=[ 3982], 99.50th=[ 4490], 99.90th=[ 8225], 99.95th=[ 8979], 00:12:14.420 | 99.99th=[10552] 00:12:14.420 bw ( KiB/s): min=78376, max=81872, per=99.04%, avg=80269.33, stdev=1766.03, samples=3 00:12:14.420 iops : min=19594, max=20468, avg=20067.33, stdev=441.51, samples=3 00:12:14.420 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:14.420 lat (msec) : 2=0.13%, 4=98.85%, 10=0.95%, 20=0.02% 00:12:14.420 cpu : usr=99.35%, sys=0.10%, ctx=5, majf=0, minf=607 00:12:14.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:14.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.420 issued rwts: total=40635,40543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.420 00:12:14.420 Run status group 0 (all jobs): 00:12:14.420 READ: bw=79.3MiB/s (83.2MB/s), 79.3MiB/s-79.3MiB/s (83.2MB/s-83.2MB/s), io=159MiB (166MB), run=2001-2001msec 00:12:14.420 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=158MiB (166MB), run=2001-2001msec 00:12:14.420 ----------------------------------------------------- 00:12:14.420 Suppressions used: 00:12:14.420 count bytes template 00:12:14.420 1 32 /usr/src/fio/parse.c 00:12:14.420 1 8 libtcmalloc_minimal.so 00:12:14.420 ----------------------------------------------------- 00:12:14.420 
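
Each of these fio passes goes through the fio_plugin wrapper traced above: ldd is asked whether the SPDK ioengine links a sanitizer runtime and, if so, that runtime is preloaded ahead of the plugin itself. A condensed sketch of the traced logic (paths as they appear in this log; the real helper lives in autotest_common.sh):

    fio_plugin() {
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan') sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (...)" -> third field
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The dots in the traddr are deliberate: fio treats ':' in a filename as a separator, so the plugin accepts '.' in place of ':' in the PCI address.
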
00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:14.420 08:25:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:14.680 08:25:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:14.680 08:25:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # local sanitizers 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # shift 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local asan_lib= 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # grep libasan 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # break 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:14.680 08:25:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:14.938 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:14.938 fio-3.35 00:12:14.938 Starting 1 thread 00:12:19.136 00:12:19.136 test: (groupid=0, jobs=1): err= 0: pid=65320: Wed Nov 20 08:25:05 2024 00:12:19.136 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec) 00:12:19.136 slat (nsec): min=3833, max=55456, avg=5073.28, stdev=1194.75 00:12:19.136 clat (usec): min=266, max=11477, avg=3042.52, stdev=329.27 00:12:19.136 lat (usec): min=270, max=11532, avg=3047.59, stdev=329.79 00:12:19.136 clat percentiles (usec): 00:12:19.136 | 1.00th=[ 2671], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:12:19.136 | 
30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3032], 00:12:19.136 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3261], 00:12:19.136 | 99.00th=[ 4228], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 8979], 00:12:19.136 | 99.99th=[11076] 00:12:19.136 bw ( KiB/s): min=82896, max=84016, per=99.56%, avg=83520.00, stdev=570.87, samples=3 00:12:19.136 iops : min=20724, max=21004, avg=20880.00, stdev=142.72, samples=3 00:12:19.136 write: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:12:19.136 slat (nsec): min=4076, max=33796, avg=5453.45, stdev=1233.18 00:12:19.136 clat (usec): min=239, max=11178, avg=3048.88, stdev=335.40 00:12:19.136 lat (usec): min=244, max=11197, avg=3054.33, stdev=335.87 00:12:19.137 clat percentiles (usec): 00:12:19.137 | 1.00th=[ 2671], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2933], 00:12:19.137 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:12:19.137 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3261], 00:12:19.137 | 99.00th=[ 4359], 99.50th=[ 5211], 99.90th=[ 7242], 99.95th=[ 9241], 00:12:19.137 | 99.99th=[10814] 00:12:19.137 bw ( KiB/s): min=82920, max=84200, per=100.00%, avg=83597.33, stdev=643.26, samples=3 00:12:19.137 iops : min=20730, max=21050, avg=20899.33, stdev=160.81, samples=3 00:12:19.137 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:19.137 lat (msec) : 2=0.05%, 4=98.55%, 10=1.35%, 20=0.03% 00:12:19.137 cpu : usr=99.50%, sys=0.00%, ctx=5, majf=0, minf=608 00:12:19.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:19.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.137 issued rwts: total=41964,41740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.137 00:12:19.137 Run status group 0 (all jobs): 00:12:19.137 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:12:19.137 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:12:19.137 ----------------------------------------------------- 00:12:19.137 Suppressions used: 00:12:19.137 count bytes template 00:12:19.137 1 32 /usr/src/fio/parse.c 00:12:19.137 1 8 libtcmalloc_minimal.so 00:12:19.137 ----------------------------------------------------- 00:12:19.137 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:19.137 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:19.396 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:19.396 08:25:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # local sanitizers 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # shift 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local asan_lib= 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # grep libasan 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # break 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:19.396 08:25:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:19.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:19.396 fio-3.35 00:12:19.396 Starting 1 thread 00:12:24.666 00:12:24.666 test: (groupid=0, jobs=1): err= 0: pid=65381: Wed Nov 20 08:25:11 2024 00:12:24.666 read: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(185MiB/2001msec) 00:12:24.666 slat (nsec): min=3744, max=48956, avg=4250.82, stdev=986.77 00:12:24.666 clat (usec): min=329, max=10777, avg=2690.44, stdev=262.57 00:12:24.666 lat (usec): min=334, max=10826, avg=2694.69, stdev=262.98 00:12:24.666 clat percentiles (usec): 00:12:24.666 | 1.00th=[ 2343], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:12:24.666 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:12:24.666 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2868], 95.00th=[ 2933], 00:12:24.666 | 99.00th=[ 3425], 99.50th=[ 3785], 99.90th=[ 5932], 99.95th=[ 7570], 00:12:24.666 | 99.99th=[10552] 00:12:24.666 bw ( KiB/s): min=92215, max=96088, per=99.67%, avg=94597.00, stdev=2084.57, samples=3 00:12:24.666 iops : min=23053, max=24022, avg=23649.00, stdev=521.57, samples=3 00:12:24.666 write: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(184MiB/2001msec); 0 zone resets 00:12:24.666 slat (nsec): min=3865, max=32445, avg=4617.67, stdev=992.95 00:12:24.666 clat (usec): min=168, max=10579, avg=2695.50, stdev=267.51 00:12:24.666 lat (usec): min=172, max=10602, avg=2700.12, stdev=267.85 00:12:24.666 clat percentiles (usec): 00:12:24.666 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:12:24.666 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:12:24.666 | 70.00th=[ 2737], 80.00th=[ 
2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:12:24.666 | 99.00th=[ 3458], 99.50th=[ 3818], 99.90th=[ 6128], 99.95th=[ 7963], 00:12:24.666 | 99.99th=[10159] 00:12:24.666 bw ( KiB/s): min=92039, max=96520, per=100.00%, avg=94578.33, stdev=2299.51, samples=3 00:12:24.666 iops : min=23009, max=24130, avg=23644.33, stdev=575.29, samples=3 00:12:24.666 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:24.666 lat (msec) : 2=0.06%, 4=99.58%, 10=0.31%, 20=0.01% 00:12:24.666 cpu : usr=99.45%, sys=0.05%, ctx=4, majf=0, minf=605 00:12:24.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:24.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.666 issued rwts: total=47480,47189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.666 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.666 00:12:24.666 Run status group 0 (all jobs): 00:12:24.666 READ: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=185MiB (194MB), run=2001-2001msec 00:12:24.666 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=184MiB (193MB), run=2001-2001msec 00:12:24.666 ----------------------------------------------------- 00:12:24.666 Suppressions used: 00:12:24.666 count bytes template 00:12:24.666 1 32 /usr/src/fio/parse.c 00:12:24.666 1 8 libtcmalloc_minimal.so 00:12:24.666 ----------------------------------------------------- 00:12:24.666 00:12:24.666 ************************************ 00:12:24.666 END TEST nvme_fio 00:12:24.666 ************************************ 00:12:24.666 08:25:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:24.666 08:25:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:24.666 00:12:24.666 real 0m19.347s 00:12:24.666 user 0m15.525s 00:12:24.666 sys 0m2.707s 00:12:24.666 08:25:11 nvme.nvme_fio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:24.666 08:25:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:24.666 ************************************ 00:12:24.666 END TEST nvme 00:12:24.666 ************************************ 00:12:24.666 00:12:24.666 real 1m34.841s 00:12:24.666 user 3m43.275s 00:12:24.666 sys 0m22.795s 00:12:24.666 08:25:11 nvme -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:24.666 08:25:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.666 08:25:11 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:24.666 08:25:11 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:24.666 08:25:11 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:12:24.666 08:25:11 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:24.666 08:25:11 -- common/autotest_common.sh@10 -- # set +x 00:12:24.666 ************************************ 00:12:24.666 START TEST nvme_scc 00:12:24.666 ************************************ 00:12:24.666 08:25:11 nvme_scc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:24.666 * Looking for test storage... 
00:12:24.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:24.666 08:25:11 nvme_scc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:12:24.666 08:25:11 nvme_scc -- common/autotest_common.sh@1638 -- # lcov --version 00:12:24.666 08:25:11 nvme_scc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:12:24.666 08:25:12 nvme_scc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.666 08:25:12 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:24.666 08:25:12 nvme_scc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.666 08:25:12 nvme_scc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:12:24.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.666 --rc genhtml_branch_coverage=1 00:12:24.666 --rc genhtml_function_coverage=1 00:12:24.666 --rc genhtml_legend=1 00:12:24.666 --rc geninfo_all_blocks=1 00:12:24.666 --rc geninfo_unexecuted_blocks=1 00:12:24.666 00:12:24.666 ' 00:12:24.666 08:25:12 nvme_scc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:12:24.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.666 --rc genhtml_branch_coverage=1 00:12:24.666 --rc genhtml_function_coverage=1 00:12:24.667 --rc genhtml_legend=1 00:12:24.667 --rc geninfo_all_blocks=1 00:12:24.667 --rc geninfo_unexecuted_blocks=1 00:12:24.667 00:12:24.667 ' 00:12:24.667 08:25:12 nvme_scc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 
00:12:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.667 --rc genhtml_branch_coverage=1 00:12:24.667 --rc genhtml_function_coverage=1 00:12:24.667 --rc genhtml_legend=1 00:12:24.667 --rc geninfo_all_blocks=1 00:12:24.667 --rc geninfo_unexecuted_blocks=1 00:12:24.667 00:12:24.667 ' 00:12:24.667 08:25:12 nvme_scc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:12:24.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.667 --rc genhtml_branch_coverage=1 00:12:24.667 --rc genhtml_function_coverage=1 00:12:24.667 --rc genhtml_legend=1 00:12:24.667 --rc geninfo_all_blocks=1 00:12:24.667 --rc geninfo_unexecuted_blocks=1 00:12:24.667 00:12:24.667 ' 00:12:24.667 08:25:12 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.667 08:25:12 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.667 08:25:12 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.667 08:25:12 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.667 08:25:12 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.667 08:25:12 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.667 08:25:12 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.667 08:25:12 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.667 08:25:12 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:24.667 08:25:12 nvme_scc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/sync/functions.sh 00:12:24.667 08:25:12 nvme_scc -- sync/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/sync/functions.sh 00:12:24.667 08:25:12 nvme_scc -- sync/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/sync/../../../ 00:12:24.667 08:25:12 nvme_scc -- sync/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@11 -- # ctrls_g=() 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@11 -- # declare -A ctrls_g 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@12 -- # nvmes_g=() 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@12 -- # declare -A nvmes_g 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@13 -- # bdfs_g=() 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@13 -- # declare -A bdfs_g 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@14 -- # ordered_ctrls_g=() 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@14 -- # declare -a ordered_ctrls_g 00:12:24.667 08:25:12 nvme_scc -- nvme/functions.sh@16 -- # nvme_name= 00:12:24.667 08:25:12 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.667 08:25:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:24.667 08:25:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:24.667 08:25:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:24.667 08:25:12 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:25.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.492 Waiting for block devices as requested 00:12:25.492 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.750 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.750 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.750 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:31.026 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:31.026 08:25:18 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@51 -- # pci=0000:00:11.0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@52 -- # pci_can_use 0000:00:11.0 00:12:31.026 08:25:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:31.026 08:25:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:31.026 08:25:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.026 08:25:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 
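
The run of evals that follows is nvme_get filling a global associative array (nvme0) with every "field : value" pair that nvme-cli's id-ctrl prints. A condensed sketch of the traced loop; the exact whitespace trimming in functions.sh is an assumption here:

    nvme_get() {
        local ref=$1 reg val; shift
        local -gA "$ref=()"                     # as traced: local -gA 'nvme0=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "vid       " -> "vid" (assumed trim)
            val=${val#"${val%%[![:space:]]*}"}  # drop leading blanks, keep padding
            [[ -n $val ]] || continue           # skip banner lines with no value
            eval "${ref}[${reg}]=\"${val}\""    # -> nvme0[vid]="0x1b36", nvme0[mdts]="7"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
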
00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x1b36"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[vid]=0x1b36 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x1af4 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 12341 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="12341 "' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[sn]='12341 ' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[fr]='8.0.0 ' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="6"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rab]=6 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="525400"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ieee]=525400 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 
]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="7"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mdts]=7 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[cntlid]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x100"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[oaes]=0x100 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x8000 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 
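Every nvme0[...]=... assignment in this stretch comes from one small loop: nvme_get runs the nvme-cli binary (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 in this run), sets IFS=: so each "field : value" output line splits into a register name and its value, skips lines with no value, and stores the pair in a global associative array via eval. A minimal sketch of the same pattern, using plain assignment in place of the traced eval (requires nvme-cli and root privileges):

    declare -A nvme0    # register name -> value, as in the trace

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue           # headers and blank lines carry no value
        reg=${reg//[[:space:]]/}            # strip the padding around the field name
        val=${val# }                        # drop the space after the colon
        nvme0[$reg]=$val                    # e.g. nvme0[vid]=0x1b36
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${nvme0[vid]} sn=${nvme0[sn]} mdts=${nvme0[mdts]}"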
00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.026 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[nvmsr]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mec]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x12a"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[oacs]=0x12a 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="3"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[acl]=3 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[frmw]="0x3"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[frmw]=0x3 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0x7"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[lpa]=0x7 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[elpe]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[npss]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[avscc]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="343"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[wctemp]=343 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="373"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[cctemp]=373 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 
'nvme0[mtfa]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[hmpre]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[edstt]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[dsto]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[hctma]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mntmt]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mntmt]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mxtmt]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[sanicap]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[sanicap]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[hmminds]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[hmminds]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[hmmaxd]="0"' 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.027 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[nsetidmax]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[endgidmax]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[anatt]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[anatt]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 
'nvme0[anacap]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[anacap]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[anagrpmax]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[nanagrpid]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[pels]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[pels]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[domainid]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[domainid]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[megcap]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[megcap]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[sqes]="0x66"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[cqes]="0x44"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[maxcmd]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[maxcmd]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[nn]="256"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[nn]=256 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 
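Several of the values just captured are exponents rather than byte counts. mdts=7 caps transfers at 2^7 units of the controller's minimum page size (4 KiB here, assuming MPSMIN=0 as is typical for this QEMU device), i.e. 512 KiB; sqes=0x66 and cqes=0x44 pack the required and maximum queue entry sizes into two nibbles, and both nibbles agree in this run: 2^6 = 64-byte submission entries and 2^4 = 16-byte completion entries. The arithmetic, checked with the traced values:

    mdts=7; mps_min=4096                      # from the trace; 4 KiB page assumed
    echo "max transfer: $(( (1 << mdts) * mps_min )) bytes"   # 524288 = 512 KiB

    sqes=0x66; cqes=0x44
    echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"           # 64
    echo "CQ entry: $(( 1 << (cqes & 0xf) )) bytes"           # 16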
00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[oncs]="0x15d"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[oncs]=0x15d 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fuses]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[fuses]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fna]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[fna]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[vwc]="0x7"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[vwc]=0x7 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[awun]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[awun]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[awupf]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[awupf]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[icsvscc]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[icsvscc]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[nwpc]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[nwpc]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[acwu]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[acwu]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.028 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme0[ocfs]="0x3"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ocfs]=0x3 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[sgls]="0x1"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[sgls]=0x1 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[mnan]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[mnan]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[maxdna]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[maxdna]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[maxcna]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[maxcna]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ioccsz]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[iorcsz]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[icdoff]="0"' 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[icdoff]=0 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.028 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[fcatt]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # 
nvme0[fcatt]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[msdbd]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[msdbd]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ofcs]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ofcs]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="-"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0[active_power_workload]=- 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@58 -- # ns_dev=nvme0n1 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:31.029 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0x140000 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0x140000 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x140000 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x14 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=7 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0x4 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[mc]="0x3"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[mc]=0x3 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0x1f 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[dps]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[dps]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nmic]="0"' 00:12:31.029 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # nvme0n1[nmic]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[rescap]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[fpi]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=1 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawun]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nawun]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawupf]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nacwu]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabsn]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabo]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabspf]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # 
read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[noiob]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwg]="0"' 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[npwg]=0 00:12:31.029 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwa]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[npwa]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[npdg]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[npdg]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[npda]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[npda]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nows]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nows]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[mssrl]="128"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=128 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[mcl]="128"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[mcl]=128 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 
'nvme0n1[msrc]="127"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[msrc]=127 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsattr]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[endgid]="0"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 
00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:11.0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.030 08:25:18 nvme_scc -- 
nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@51 -- # pci=0000:00:10.0 00:12:31.030 08:25:18 nvme_scc -- nvme/functions.sh@52 -- # pci_can_use 0000:00:10.0 00:12:31.030 08:25:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:31.030 08:25:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:31.030 08:25:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.030 08:25:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@53 -- # ctrl_dev=nvme1 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@54 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme1 reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme1=()' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[vid]="0x1b36"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[vid]=0x1b36 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ssvid]=0x1af4 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 12340 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[sn]="12340 "' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[sn]='12340 ' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fr]='8.0.0 ' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rab]="6"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rab]=6 
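Also worth decoding from the nvme0n1 block just above: the low nibble of flbas=0x4 selects LBA format 4, whose descriptor lbaf4 reads "ms:0 lbads:12 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata, and nsze=0x140000 counts blocks of that size. The capacity arithmetic, using those traced values:

    flbas=0x4; nsze=0x140000; lbads=12         # nvme0n1 fields from the trace
    fmt=$(( flbas & 0xf ))                     # low nibble picks the LBA format
    echo "format in use: lbaf$fmt"             # lbaf4, matching "(in use)" above
    echo "block size: $(( 1 << lbads )) bytes" # 4096
    echo "capacity: $(( nsze * (1 << lbads) )) bytes"   # 5368709120 = 5 GiB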
00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ieee]="525400"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ieee]=525400 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[cmic]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[cmic]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mdts]="7"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mdts]=7 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[cntlid]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[cntlid]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ver]="0x10400"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ver]=0x10400 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3r]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rtd3r]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3e]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rtd3e]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[oaes]="0x100"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[oaes]=0x100 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ctratt]=0x8000 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 
08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rrls]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rrls]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[cntrltype]="1"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[cntrltype]=1 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[crdt1]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[crdt1]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[crdt2]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[crdt2]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[crdt3]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[crdt3]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[nvmsr]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[nvmsr]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[vwci]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[vwci]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mec]="0"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mec]=0 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:31.031 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme1[oacs]="0x12a"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[oacs]=0x12a 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[acl]="3"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[acl]=3 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[aerl]="3"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[aerl]=3 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[frmw]="0x3"' 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[frmw]=0x3 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.031 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[lpa]="0x7"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[lpa]=0x7 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[elpe]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[elpe]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[npss]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[npss]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[avscc]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[avscc]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[apsta]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[apsta]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[wctemp]="343"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[wctemp]=343 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 
00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[cctemp]="373"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[cctemp]=373 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mtfa]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mtfa]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[hmpre]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[hmpre]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[hmmin]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[hmmin]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[tnvmcap]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[tnvmcap]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[unvmcap]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[unvmcap]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rpmbs]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rpmbs]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[edstt]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[edstt]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[dsto]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[dsto]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme1[fwug]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fwug]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[kas]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[kas]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[hctma]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[hctma]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mntmt]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mntmt]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mxtmt]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mxtmt]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[sanicap]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[sanicap]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[hmminds]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[hmminds]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[hmmaxd]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[hmmaxd]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[nsetidmax]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[nsetidmax]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[endgidmax]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[endgidmax]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # 
IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[anatt]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[anatt]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[anacap]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[anacap]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[anagrpmax]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[anagrpmax]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[nanagrpid]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[nanagrpid]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[pels]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[pels]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[domainid]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[domainid]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[megcap]="0"' 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[megcap]=0 00:12:31.032 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[sqes]="0x66"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[sqes]=0x66 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[cqes]="0x44"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[cqes]=0x44 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 
08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[maxcmd]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[maxcmd]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[nn]="256"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[nn]=256 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[oncs]="0x15d"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[oncs]=0x15d 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[fuses]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fuses]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[fna]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fna]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[vwc]="0x7"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[vwc]=0x7 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[awun]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[awun]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[awupf]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[awupf]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[icsvscc]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[icsvscc]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[nwpc]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[nwpc]=0 00:12:31.033 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[acwu]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[acwu]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ocfs]="0x3"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ocfs]=0x3 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[sgls]="0x1"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[sgls]=0x1 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[mnan]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[mnan]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[maxdna]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[maxdna]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[maxcna]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[maxcna]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ioccsz]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ioccsz]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[iorcsz]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[iorcsz]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 
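Everything above is the same two-step dance repeated per field: nvme/functions.sh@23 splits each line of nvme-cli output on ':' via IFS and read -r reg val, @24 skips entries with no value, and @25 evals the pair into a global associative array (nvme1 here). A minimal sketch of that nvme_get pattern, reconstructed from the xtrace rather than copied from SPDK's nvme/functions.sh, so the exact key/value trimming is an assumption:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # matches the trace's local -gA 'nvme1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # skip banner/blank lines with no "key : value"
            reg=${reg//[[:space:]]/}            # 'ps    0 ' -> ps0, as the trace's keys suggest
            val=${val# }                        # drop the space right after ':'
            eval "${ref}[$reg]=\"$val\""        # quoting preserves values like 'QEMU NVMe Ctrl '
        done < <("$@")
    }

    # Usage mirroring the trace (binary path taken from the @18 lines;
    # the real helper may prepend it internally):
    #   nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    #   echo "${nvme1[subnqn]}"                 # -> nqn.2019-08.org.qemu:12340

Note that read assigns everything after the first ':' to val, which is why multi-colon values such as 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' survive intact.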
00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[icdoff]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[icdoff]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[fcatt]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[fcatt]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[msdbd]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[msdbd]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ofcs]="0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ofcs]=0 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1[active_power_workload]="-"' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1[active_power_workload]=- 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme1_ns 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@58 -- # ns_dev=nvme1n1 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@59 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme1n1 reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 
'nvme1n1=()' 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:31.033 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nsze]=0x17a17a 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[ncap]=0x17a17a 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nuse]=0x17a17a 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nsfeat]=0x14 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nlbaf]=7 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[flbas]=0x7 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[mc]="0x3"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[mc]=0x3 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[dpc]=0x1f 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read 
-r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[dps]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[dps]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nmic]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nmic]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[rescap]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[rescap]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[fpi]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[fpi]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[dlfeat]=1 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawun]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nawun]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawupf]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nawupf]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nacwu]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nacwu]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabsn]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nabsn]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabo]="0"' 
00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nabo]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabspf]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nabspf]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[noiob]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[noiob]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nvmcap]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwg]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[npwg]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwa]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[npwa]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[npdg]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[npdg]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[npda]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[npda]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nows]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nows]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[mssrl]="128"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[mssrl]=128 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 
nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[mcl]="128"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[mcl]=128 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[msrc]="127"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[msrc]=127 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nulbaf]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[anagrpid]=0 00:12:31.034 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsattr]="0"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nsattr]=0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nvmsetid]=0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[endgid]="0"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[endgid]=0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[eui64]=0000000000000000 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc 
-- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- 
nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme1 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme1_ns 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:10.0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme1 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@51 -- # pci=0000:00:12.0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@52 -- # pci_can_use 0000:00:12.0 00:12:31.035 08:25:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:31.035 08:25:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:31.035 08:25:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.035 08:25:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@53 -- # ctrl_dev=nvme2 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@54 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme2 reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme2=()' 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.035 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[vid]="0x1b36"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[vid]=0x1b36 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ssvid]=0x1af4 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 12342 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[sn]="12342 "' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[sn]='12342 ' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 
8.0.0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fr]='8.0.0 ' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rab]="6"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rab]=6 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ieee]="525400"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ieee]=525400 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[cmic]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[cmic]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mdts]="7"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mdts]=7 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[cntlid]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[cntlid]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ver]="0x10400"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ver]=0x10400 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rtd3r]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rtd3r]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rtd3e]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rtd3e]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[oaes]="0x100"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[oaes]=0x100 
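Just before the nvme2 sweep above, the trace showed the bookkeeping step: nvme/functions.sh@60 files each namespace into the controller's map, @62-@65 register the controller in ctrls_g, nvmes_g, bdfs_g and ordered_ctrls_g, and @49-@54 advance to /sys/class/nvme/nvme2 and gate it through pci_can_use. The skeleton being traced is roughly the sketch below; how $pci is derived and the simplified pci_can_use stand-in are assumptions, not copies of scripts/common.sh:

    declare -A ctrls_g nvmes_g bdfs_g
    declare -a ordered_ctrls_g

    pci_can_use() {                     # simplified stand-in: the real check also honors PCI_ALLOWED
        [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]
    }

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of 0000:00:12.0 etc.
            pci_can_use "$pci" || continue
            ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
            nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl_dev}n"*; do               # /sys/class/nvme/nvme1/nvme1n1 ...
                [[ -e $ns ]] || continue
                nvme_get "${ns##*/}" /usr/local/src/nvme-cli/nvme id-ns "/dev/${ns##*/}"
                _ctrl_ns[${ns##*n}]=${ns##*/}                 # nvme1_ns[1]=nvme1n1
            done
            ctrls_g[$ctrl_dev]=$ctrl_dev
            nvmes_g[$ctrl_dev]=${ctrl_dev}_ns
            bdfs_g[$ctrl_dev]=$pci
            ordered_ctrls_g[${ctrl_dev/nvme/}]=$ctrl_dev      # keeps controllers index-sorted
        done
    }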
00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ctratt]=0x8000 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rrls]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rrls]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[cntrltype]="1"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[cntrltype]=1 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[crdt1]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[crdt1]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[crdt2]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[crdt2]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[crdt3]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[crdt3]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[nvmsr]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[nvmsr]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[vwci]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[vwci]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 
08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mec]="0"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mec]=0 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[oacs]="0x12a"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[oacs]=0x12a 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[acl]="3"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[acl]=3 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[aerl]="3"' 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[aerl]=3 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.300 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[frmw]="0x3"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[frmw]=0x3 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[lpa]="0x7"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[lpa]=0x7 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[elpe]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[elpe]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[npss]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[npss]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[avscc]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[avscc]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 
'nvme2[apsta]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[apsta]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[wctemp]="343"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[wctemp]=343 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[cctemp]="373"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[cctemp]=373 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mtfa]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mtfa]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[hmpre]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[hmpre]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[hmmin]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[hmmin]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[tnvmcap]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[tnvmcap]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[unvmcap]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[unvmcap]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rpmbs]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rpmbs]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[edstt]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[edstt]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 
nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[dsto]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[dsto]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fwug]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fwug]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[kas]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[kas]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[hctma]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[hctma]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mntmt]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mntmt]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mxtmt]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mxtmt]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[sanicap]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[sanicap]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[hmminds]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[hmminds]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[hmmaxd]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[hmmaxd]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 
'nvme2[nsetidmax]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[nsetidmax]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[endgidmax]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[endgidmax]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[anatt]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[anatt]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[anacap]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[anacap]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[anagrpmax]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[anagrpmax]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[nanagrpid]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[nanagrpid]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[pels]="0"' 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[pels]=0 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.301 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[domainid]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[domainid]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[megcap]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[megcap]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[sqes]="0x66"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[sqes]=0x66 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 
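The xtrace above is the nvme_get helper walking `nvme id-ctrl` output for the nvme2 controller: each `IFS=: read -r reg val` at @23 splits a "field : value" line, the `[[ -n ... ]]` guard at @24 skips lines that carry no value, and the eval at @25 lands the pair in the global associative array nvme2 (oacs=0x12a, acl=3, frmw=0x3, wctemp=343, sqes=0x66, and so on). A minimal sketch of that loop follows — not SPDK's verbatim nvme/functions.sh; the key sanitization and whitespace trimming are assumptions, while the pinned nvme-cli path is the one the @18 lines above actually invoke:

  # Sketch only; details marked "assumed" differ from the real helper.
  nvme_get() {                          # usage: nvme_get nvme2 id-ctrl /dev/nvme2
      local ref=$1 reg val
      shift
      local -gA "$ref=()"               # declare/reset the target array globally (@22)
      while IFS=: read -r reg val; do   # split each "field : value" line (@23)
          reg=${reg//[![:alnum:]]/}     # assumed: squeeze the field name into a key
          val=${val# }                  # assumed: drop the space after the colon
          [[ -n $val ]] || continue     # nothing after the colon -> skip (@24)
          eval "${ref}[\$reg]=\$val"    # e.g. nvme2[oacs]=0x12a (@25)
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # id-ctrl / id-ns text (@18)
  }
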
00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[cqes]="0x44"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[cqes]=0x44 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[maxcmd]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[maxcmd]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[nn]="256"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[nn]=256 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[oncs]="0x15d"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[oncs]=0x15d 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fuses]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fuses]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fna]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fna]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[vwc]="0x7"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[vwc]=0x7 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[awun]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[awun]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[awupf]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[awupf]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme2[icsvscc]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[icsvscc]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[nwpc]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[nwpc]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[acwu]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[acwu]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ocfs]="0x3"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ocfs]=0x3 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[sgls]="0x1"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[sgls]=0x1 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[mnan]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[mnan]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[maxdna]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[maxdna]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[maxcna]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[maxcna]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ioccsz]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # 
nvme2[ioccsz]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[iorcsz]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[iorcsz]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[icdoff]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[icdoff]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[fcatt]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[fcatt]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[msdbd]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[msdbd]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ofcs]="0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ofcs]=0 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2[active_power_workload]="-"' 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2[active_power_workload]=- 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme2_ns 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@56 
-- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@58 -- # ns_dev=nvme2n1 00:12:31.302 08:25:18 nvme_scc -- nvme/functions.sh@59 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme2n1 reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme2n1=()' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nsze]=0x100000 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[ncap]=0x100000 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nuse]=0x100000 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nsfeat]=0x14 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nlbaf]=7 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[flbas]=0x4 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[mc]="0x3"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # 
nvme2n1[mc]=0x3 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[dpc]=0x1f 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[dps]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[dps]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nmic]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nmic]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[rescap]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[rescap]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[fpi]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[fpi]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[dlfeat]=1 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nawun]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nawun]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nawupf]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nawupf]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nacwu]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nacwu]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 
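Two of the id-ns fields just captured for nvme2n1 control the namespace geometry: nlbaf=7 means eight LBA formats are defined (the field is zero-based in the NVMe spec), and the low nibble of flbas=0x4 selects which one is in use. A quick check of that decode, matching the lbaf0..lbaf7 entries listed further below:

  flbas=0x4; nlbaf=7
  echo $(( flbas & 0xf ))    # -> 4: index of the in-use format, i.e. lbaf4
  echo $(( nlbaf + 1 ))      # -> 8: formats lbaf0 through lbaf7
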
00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabsn]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nabsn]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabo]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nabo]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nabspf]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nabspf]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[noiob]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[noiob]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nvmcap]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[npwg]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[npwg]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[npwa]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[npwa]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[npdg]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[npdg]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[npda]="0"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[npda]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nows]="0"' 00:12:31.303 
08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nows]=0 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[mssrl]="128"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[mssrl]=128 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[mcl]="128"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[mcl]=128 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[msrc]="127"' 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[msrc]=127 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.303 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nulbaf]=0 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[anagrpid]=0 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nsattr]="0"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nsattr]=0 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[nvmsetid]=0 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[endgid]="0"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[endgid]=0 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # 
nvme2n1[nguid]=00000000000000000000000000000000 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[eui64]=0000000000000000 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@58 -- # ns_dev=nvme2n2 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@59 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme2n2 reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme2n2=()' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nsze]=0x100000 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[ncap]=0x100000 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nuse]=0x100000 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:31.304 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nsfeat]=0x14 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nlbaf]=7 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- 
# IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[flbas]=0x4 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[mc]="0x3"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[mc]=0x3 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[dpc]=0x1f 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[dps]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[dps]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nmic]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nmic]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[rescap]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[rescap]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[fpi]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[fpi]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[dlfeat]=1 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nawun]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nawun]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
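For nvme2n1 above, the in-use format lbaf4 reports lbads:12 with ms:0 — LBADS is the log2 of the LBA data size, so the blocks are 2^12 = 4096 bytes — and nsze=0x100000 blocks, so the namespace sizes out to 4 GiB:

  echo $(( 1 << 12 ))                 # -> 4096-byte LBAs from lbads:12
  echo $(( 0x100000 * (1 << 12) ))    # -> 4294967296 bytes, i.e. 4 GiB
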
00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nawupf]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nawupf]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nacwu]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nacwu]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabsn]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nabsn]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabo]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nabo]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nabspf]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nabspf]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[noiob]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[noiob]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nvmcap]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[npwg]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[npwg]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[npwa]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[npwa]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[npdg]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[npdg]=0 
00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[npda]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[npda]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nows]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nows]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[mssrl]="128"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[mssrl]=128 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[mcl]="128"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[mcl]=128 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[msrc]="127"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[msrc]=127 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nulbaf]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[anagrpid]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nsattr]="0"' 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nsattr]=0 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.305 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nvmsetid]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 
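Once a namespace's table is filled, the @60 line records it in the controller's _ctrl_ns map (seen for nvme2n1 above as `_ctrl_ns[${ns##*n}]=nvme2n1`), keyed by the index that `${ns##*n}` peels off the sysfs path: the `##*n` expansion greedy-strips through the last "n", leaving just the namespace number. The same expansion in isolation:

  ns=/sys/class/nvme/nvme2/nvme2n1
  echo "${ns##*n}"    # -> 1, so the @60 assignment is _ctrl_ns[1]=nvme2n1
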
00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[endgid]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[endgid]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[eui64]=0000000000000000 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.306 08:25:18 
nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@58 -- # ns_dev=nvme2n3 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@59 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme2n3 reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme2n3=()' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nsze]=0x100000 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[ncap]=0x100000 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nuse]=0x100000 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc 
-- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nsfeat]=0x14 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nlbaf]=7 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[flbas]=0x4 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[mc]="0x3"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[mc]=0x3 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[dpc]=0x1f 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[dps]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[dps]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nmic]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nmic]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[rescap]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[rescap]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[fpi]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[fpi]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:31.306 08:25:18 
nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[dlfeat]=1 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawun]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nawun]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawupf]="0"' 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nawupf]=0 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.306 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nacwu]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nacwu]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabsn]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nabsn]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabo]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nabo]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabspf]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nabspf]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[noiob]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[noiob]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nvmcap]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwg]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[npwg]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwa]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[npwa]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[npdg]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[npdg]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[npda]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[npda]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nows]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nows]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[mssrl]="128"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[mssrl]=128 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[mcl]="128"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[mcl]=128 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[msrc]="127"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[msrc]=127 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nulbaf]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[anagrpid]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme2n3[nsattr]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nsattr]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nvmsetid]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[endgid]="0"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[endgid]=0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[eui64]=0000000000000000 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 
-- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme2 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme2_ns 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:12.0 00:12:31.307 08:25:18 nvme_scc -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme2 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@51 -- # pci=0000:00:13.0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@52 -- # pci_can_use 0000:00:13.0 00:12:31.308 08:25:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:31.308 08:25:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:31.308 08:25:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.308 08:25:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@53 -- # ctrl_dev=nvme3 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@54 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@19 -- # local ref=nvme3 reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@20 -- # shift 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@22 -- # local -gA 'nvme3=()' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@18 -- # 
/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[vid]="0x1b36"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[vid]=0x1b36 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ssvid]=0x1af4 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 12343 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[sn]="12343 "' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[sn]='12343 ' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fr]='8.0.0 ' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rab]="6"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rab]=6 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ieee]="525400"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ieee]=525400 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x2 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[cmic]="0x2"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[cmic]=0x2 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mdts]="7"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # 
nvme3[mdts]=7 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[cntlid]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[cntlid]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ver]="0x10400"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ver]=0x10400 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3r]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rtd3r]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3e]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rtd3e]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[oaes]="0x100"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[oaes]=0x100 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x88010 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ctratt]=0x88010 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rrls]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rrls]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[cntrltype]="1"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[cntrltype]=1 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.308 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[crdt1]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[crdt1]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[crdt2]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[crdt2]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[crdt3]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[crdt3]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[nvmsr]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[nvmsr]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[vwci]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[vwci]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mec]="0"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mec]=0 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[oacs]="0x12a"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[oacs]=0x12a 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[acl]="3"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[acl]=3 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[aerl]="3"' 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[aerl]=3 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.308 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.309 08:25:18 
nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[frmw]="0x3"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[frmw]=0x3 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[lpa]="0x7"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[lpa]=0x7 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[elpe]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[elpe]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[npss]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[npss]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[avscc]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[avscc]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[apsta]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[apsta]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[wctemp]="343"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[wctemp]=343 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[cctemp]="373"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[cctemp]=373 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mtfa]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mtfa]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[hmpre]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[hmpre]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 
-- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[hmmin]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[hmmin]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[tnvmcap]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[tnvmcap]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[unvmcap]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[unvmcap]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rpmbs]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rpmbs]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[edstt]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[edstt]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[dsto]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[dsto]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fwug]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fwug]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[kas]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[kas]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[hctma]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[hctma]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- 
nvme/functions.sh@25 -- # eval 'nvme3[mntmt]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mntmt]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mxtmt]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mxtmt]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[sanicap]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[sanicap]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[hmminds]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[hmminds]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[hmmaxd]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[hmmaxd]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[nsetidmax]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[nsetidmax]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[endgidmax]="1"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[endgidmax]=1 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[anatt]="0"' 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[anatt]=0 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.309 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[anacap]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[anacap]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[anagrpmax]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[anagrpmax]=0 00:12:31.310 08:25:18 nvme_scc -- 
nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[nanagrpid]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[nanagrpid]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[pels]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[pels]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[domainid]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[domainid]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[megcap]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[megcap]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[sqes]="0x66"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[sqes]=0x66 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[cqes]="0x44"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[cqes]=0x44 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[maxcmd]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[maxcmd]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[nn]="256"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[nn]=256 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[oncs]="0x15d"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[oncs]=0x15d 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- 
# [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fuses]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fuses]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fna]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fna]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[vwc]="0x7"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[vwc]=0x7 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[awun]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[awun]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[awupf]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[awupf]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[icsvscc]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[icsvscc]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[nwpc]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[nwpc]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[acwu]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[acwu]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ocfs]="0x3"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ocfs]=0x3 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[sgls]="0x1"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[sgls]=0x1 00:12:31.310 08:25:18 nvme_scc 
-- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[mnan]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[mnan]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[maxdna]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[maxdna]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[maxcna]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[maxcna]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ioccsz]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ioccsz]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[iorcsz]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[iorcsz]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[icdoff]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[icdoff]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[fcatt]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[fcatt]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[msdbd]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[msdbd]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r 
reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ofcs]="0"' 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ofcs]=0 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.310 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # eval 'nvme3[active_power_workload]="-"' 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@25 -- # nvme3[active_power_workload]=- 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # IFS=: 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@23 -- # read -r reg val 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme3_ns 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme3_ns 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:13.0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@67 -- # (( 4 > 0 )) 00:12:31.311 08:25:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@206 -- # local _ctrls feature=scc 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@208 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@208 -- # get_ctrls_with_feature scc 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@194 -- # (( 4 == 0 )) 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@196 -- # local ctrl feature=scc 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@198 -- # type -t ctrl_has_scc 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@198 -- # [[ function == function ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # ctrl_has_scc nvme1 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@186 -- # local ctrl=nvme1 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # get_oncs nvme1 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@173 -- # local ctrl=nvme1 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@174 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@71 -- 
# local ctrl=nvme1 reg=oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@77 -- # [[ -n 0x15d ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@78 -- # echo 0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # oncs=0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@190 -- # (( oncs & 1 << 8 )) 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # echo nvme1 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # ctrl_has_scc nvme0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@186 -- # local ctrl=nvme0 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # get_oncs nvme0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@173 -- # local ctrl=nvme0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@174 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@77 -- # [[ -n 0x15d ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@78 -- # echo 0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # oncs=0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@190 -- # (( oncs & 1 << 8 )) 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # echo nvme0 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # ctrl_has_scc nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@186 -- # local ctrl=nvme3 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # get_oncs nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@173 -- # local ctrl=nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@174 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@71 -- # local ctrl=nvme3 reg=oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@73 -- # [[ -n nvme3 ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@75 -- # local -n _ctrl=nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@77 -- # [[ -n 0x15d ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@78 -- # echo 0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # oncs=0x15d 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@190 -- # (( oncs & 1 << 8 )) 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # echo nvme3 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # ctrl_has_scc nvme2 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@186 -- # local ctrl=nvme2 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # get_oncs nvme2 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@173 -- # local ctrl=nvme2 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@174 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@71 -- # local ctrl=nvme2 reg=oncs 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@73 -- # [[ -n nvme2 ]] 00:12:31.311 08:25:18 nvme_scc -- nvme/functions.sh@75 -- # local -n _ctrl=nvme2 00:12:31.570 08:25:18 
nvme_scc -- nvme/functions.sh@77 -- # [[ -n 0x15d ]]
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@78 -- # echo 0x15d
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@188 -- # oncs=0x15d
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@190 -- # (( oncs & 1 << 8 ))
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@201 -- # echo nvme2
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@209 -- # (( 4 > 0 ))
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@210 -- # echo nvme1
00:12:31.570 08:25:18 nvme_scc -- nvme/functions.sh@211 -- # return 0
00:12:31.570 08:25:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:31.570 08:25:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:12:31.570 08:25:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:32.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:32.719 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:32.719 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:32.719 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:32.977 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:32.977 08:25:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:32.977 08:25:20 nvme_scc -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']'
00:12:32.977 08:25:20 nvme_scc -- common/autotest_common.sh@1114 -- # xtrace_disable
00:12:32.977 08:25:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:32.977 ************************************
00:12:32.977 START TEST nvme_simple_copy
00:12:32.977 ************************************
00:12:32.977 08:25:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:33.236 Initializing NVMe Controllers
00:12:33.236 Attaching to 0000:00:10.0
00:12:33.236 Controller supports SCC. Attached to 0000:00:10.0
00:12:33.236 Namespace ID: 1 size: 6GB
00:12:33.236 Initialization complete.
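
The controller selection traced just above turns on a single bit: for each of the four controllers, ctrl_has_scc() in nvme/functions.sh looks up the ONCS value captured from "nvme id-ctrl" and tests bit 8, the Copy command bit. 0x15d has 0x100 set, so all four QEMU controllers report Simple Copy support and nvme_scc.sh settles on the first one echoed, nvme1 at 0000:00:10.0. A condensed, stand-alone sketch of that check (the real helper resolves oncs out of the per-controller associative array rather than taking it as an argument):

    # Bit 8 of the NVMe ONCS field advertises the Copy (Simple Copy) command.
    ctrl_has_scc() {
        local oncs=$1
        (( oncs & 1 << 8 ))   # true when the SCC bit is set
    }
    ctrl_has_scc 0x15d && echo "controller supports SCC"   # 0x15d & 0x100 != 0
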
00:12:33.236
00:12:33.236 Controller QEMU NVMe Ctrl (12340 )
00:12:33.236 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:33.236 Namespace Block Size:4096
00:12:33.236 Writing LBAs 0 to 63 with Random Data
00:12:33.236 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:33.236 LBAs matching Written Data: 64
00:12:33.236
00:12:33.236 real 0m0.324s
00:12:33.236 user 0m0.128s
00:12:33.236 sys 0m0.093s
00:12:33.236 08:25:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1133 -- # xtrace_disable
00:12:33.236 ************************************
00:12:33.236 END TEST nvme_simple_copy 08:25:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:33.236 ************************************
00:12:33.495 ************************************
00:12:33.495 END TEST nvme_scc
00:12:33.495 ************************************
00:12:33.495
00:12:33.495 real 0m9.040s
00:12:33.495 user 0m1.588s
00:12:33.495 sys 0m2.471s
00:12:33.495 08:25:20 nvme_scc -- common/autotest_common.sh@1133 -- # xtrace_disable
00:12:33.495 08:25:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:33.495 08:25:20 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:33.495 08:25:20 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:33.495 08:25:20 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:33.495 08:25:20 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:33.495 08:25:20 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:33.496 08:25:20 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']'
00:12:33.496 08:25:20 -- common/autotest_common.sh@1114 -- # xtrace_disable
00:12:33.496 08:25:20 -- common/autotest_common.sh@10 -- # set +x
00:12:33.496 ************************************
00:12:33.496 START TEST nvme_fdp
00:12:33.496 ************************************
00:12:33.496 08:25:20 nvme_fdp -- common/autotest_common.sh@1132 -- # test/nvme/nvme_fdp.sh
00:12:33.755 * Looking for test storage...
00:12:33.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:33.755 08:25:21 nvme_fdp -- common/autotest_common.sh@1637 -- # [[ y == y ]]
00:12:33.755 08:25:21 nvme_fdp -- common/autotest_common.sh@1638 -- # lcov --version
00:12:33.755 08:25:21 nvme_fdp -- common/autotest_common.sh@1638 -- # awk '{print $NF}'
00:12:33.755 08:25:21 nvme_fdp -- common/autotest_common.sh@1638 -- # lt 1.15 2
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:12:33.755 08:25:21 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
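
The lt/cmp_versions trace entered here is checking whether the installed lcov (1.15) predates version 2: both version strings are split on '.', '-' and ':' via the IFS=.-: reads above, then the fields are compared numerically left to right, with the loop running up to the longer of the two lengths and missing fields treated as 0. An illustrative stand-alone rewrite of that comparison (version_lt is a hypothetical name, not the exact helper in scripts/common.sh):

    # Split two dotted version strings and compare them numerically, field by field.
    version_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"

For this run it reduces to 1 < 2 in the first field, so the pre-2.0 spelling of the coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is what gets folded into LCOV_OPTS just below.
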
ver1_l : ver2_l) )) 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:33.756 08:25:21 nvme_fdp -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.756 08:25:21 nvme_fdp -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:12:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.756 --rc genhtml_branch_coverage=1 00:12:33.756 --rc genhtml_function_coverage=1 00:12:33.756 --rc genhtml_legend=1 00:12:33.756 --rc geninfo_all_blocks=1 00:12:33.756 --rc geninfo_unexecuted_blocks=1 00:12:33.756 00:12:33.756 ' 00:12:33.756 08:25:21 nvme_fdp -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:12:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.756 --rc genhtml_branch_coverage=1 00:12:33.756 --rc genhtml_function_coverage=1 00:12:33.756 --rc genhtml_legend=1 00:12:33.756 --rc geninfo_all_blocks=1 00:12:33.756 --rc geninfo_unexecuted_blocks=1 00:12:33.756 00:12:33.756 ' 00:12:33.756 08:25:21 nvme_fdp -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:12:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.756 --rc genhtml_branch_coverage=1 00:12:33.756 --rc genhtml_function_coverage=1 00:12:33.756 --rc genhtml_legend=1 00:12:33.756 --rc geninfo_all_blocks=1 00:12:33.756 --rc geninfo_unexecuted_blocks=1 00:12:33.756 00:12:33.756 ' 00:12:33.756 08:25:21 nvme_fdp -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:12:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.756 --rc genhtml_branch_coverage=1 00:12:33.756 --rc genhtml_function_coverage=1 00:12:33.756 --rc genhtml_legend=1 00:12:33.756 --rc geninfo_all_blocks=1 00:12:33.756 --rc geninfo_unexecuted_blocks=1 00:12:33.756 00:12:33.756 ' 00:12:33.756 08:25:21 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.756 08:25:21 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.756 08:25:21 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.756 08:25:21 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.756 08:25:21 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.756 08:25:21 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:33.756 08:25:21 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/sync/functions.sh 00:12:33.756 08:25:21 nvme_fdp -- sync/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/sync/functions.sh 00:12:33.756 08:25:21 nvme_fdp -- sync/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/sync/../../../ 00:12:33.756 08:25:21 nvme_fdp -- sync/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@11 -- # ctrls_g=() 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@11 -- # declare -A ctrls_g 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@12 -- # nvmes_g=() 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@12 -- # declare -A nvmes_g 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@13 -- # bdfs_g=() 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@13 -- # declare -A bdfs_g 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@14 -- # ordered_ctrls_g=() 
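
The lt/cmp_versions trace a few entries back (cmp_versions in scripts/common.sh) is a component-wise version comparison: both version strings are split on '.', '-' and ':' into arrays (IFS=.-:), then the fields are compared numerically left to right. A condensed sketch of that logic, simplified from what the xtrace shows (the real helper also validates every field through decimal and supports operators other than '<'):

    # Sketch of the comparison traced above: "lt 1.15 2" asks whether
    # version 1.15 sorts before version 2. Missing fields compare as 0.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ver1[v] > ver2[v] )) && return 1
            (( ver1[v] < ver2[v] )) && return 0
        done
        return 1 # equal is not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"

In the run above the gate passes (the installed lcov 1.15 is older than 2), which is why the pre-2.0 option spelling --rc lcov_branch_coverage=1 ends up in the exported LCOV_OPTS.
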
00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@14 -- # declare -a ordered_ctrls_g 00:12:33.756 08:25:21 nvme_fdp -- nvme/functions.sh@16 -- # nvme_name= 00:12:33.756 08:25:21 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:33.756 08:25:21 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:34.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:34.585 Waiting for block devices as requested 00:12:34.585 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:34.845 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:34.845 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:35.104 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.485 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:40.485 08:25:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@51 -- # pci=0000:00:11.0 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@52 -- # pci_can_use 0000:00:11.0 00:12:40.485 08:25:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:40.485 08:25:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:40.485 08:25:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:40.485 08:25:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.485 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x1b36"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[vid]=0x1b36 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x1af4 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 12341 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="12341 "' 00:12:40.486 08:25:27 nvme_fdp -- 
nvme/functions.sh@25 -- # nvme0[sn]='12341 ' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fr]='8.0.0 ' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="6"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rab]=6 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="525400"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ieee]=525400 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="7"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mdts]=7 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[cntlid]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 
nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x100"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[oaes]=0x100 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x8000 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 
08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[nvmsr]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mec]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x12a"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[oacs]=0x12a 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="3"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[acl]=3 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[frmw]="0x3"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[frmw]=0x3 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0x7"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[lpa]=0x7 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="0"' 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[elpe]=0 00:12:40.486 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # 
nvme0[npss]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[avscc]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="343"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[wctemp]=343 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="373"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[cctemp]=373 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mtfa]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[hmpre]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 
nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[edstt]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[dsto]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[hctma]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mntmt]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mntmt]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mxtmt]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[sanicap]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[sanicap]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[hmminds]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # 
nvme0[hmminds]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[hmmaxd]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[nsetidmax]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[endgidmax]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[anatt]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[anatt]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[anacap]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[anacap]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[anagrpmax]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[nanagrpid]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[pels]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[pels]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[domainid]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[domainid]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 
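
Every field in this long block is produced by the same loop: nvme_get (nvme/functions.sh) runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0, splits each human-readable "reg : val" line on ':' (IFS=:), and evals the pair into a global associative array, which is why the xtrace repeats an IFS=: / read / eval triple per register. A minimal sketch of the pattern, with the caveat that the real function also preserves significant trailing padding in string values such as sn and mn:

    # Parse nvme-cli's human-readable id-ctrl output into an associative
    # array keyed by register name, as the trace does for nvme0.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skips the banner line - the trace's first [[ -n '' ]]
        reg=${reg//[[:space:]]/}               # "vid       " -> "vid"
        nvme0[$reg]=${val# }                   # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

Values that themselves contain ':' (subnqn nqn.2019-08.org.qemu:12341, the ps0 power-state line) survive intact because read folds everything after the first delimiter into its last variable.
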
00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[megcap]="0"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[megcap]=0 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[sqes]="0x66"' 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.487 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[cqes]="0x44"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[maxcmd]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[maxcmd]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[nn]="256"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[nn]=256 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[oncs]="0x15d"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[oncs]=0x15d 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fuses]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fuses]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fna]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fna]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[vwc]="0x7"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[vwc]=0x7 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[awun]="0"' 00:12:40.488 08:25:27 
nvme_fdp -- nvme/functions.sh@25 -- # nvme0[awun]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[awupf]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[awupf]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[icsvscc]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[icsvscc]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[nwpc]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[nwpc]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[acwu]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[acwu]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ocfs]="0x3"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ocfs]=0x3 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[sgls]="0x1"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[sgls]=0x1 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[mnan]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[mnan]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[maxdna]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[maxdna]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[maxcna]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[maxcna]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 
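
The oncs value captured just above (nvme0[oncs]=0x15d) is the same Optional NVM Command Support mask that the nvme_scc run at the top of this log tested with (( oncs & 1 << 8 )) before launching the simple-copy test: bit 8 advertises the Copy command. Decoding the rest of the mask, with bit names taken from the NVMe base specification rather than from this log:

    # Decode ONCS=0x15d; bit 8 (Copy) is the gate functions.sh@190 checked.
    oncs=0x15d
    names=([0]=Compare [1]="Write Uncorrectable" [2]="Dataset Management"
           [3]="Write Zeroes" [4]="Save/Select in Features" [5]=Reservations
           [6]=Timestamp [7]=Verify [8]=Copy)
    for bit in "${!names[@]}"; do
        (( oncs & 1 << bit )) && printf 'bit %d: %s\n' "$bit" "${names[bit]}"
    done

For 0x15d that prints Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy, consistent with a QEMU controller that passes the SCC gate.
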
00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ioccsz]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[iorcsz]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[icdoff]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[icdoff]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[fcatt]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[fcatt]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[msdbd]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[msdbd]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ofcs]="0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ofcs]=0 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:40.488 08:25:27 
nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="-"' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0[active_power_workload]=- 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@58 -- # ns_dev=nvme0n1 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()' 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:40.488 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0x140000 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0x140000 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x140000 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x14 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=7 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 
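
The id-ns dump for nvme0n1 opens with nsze = ncap = nuse = 0x140000, and further down the in-use LBA format is lbaf4 with lbads:12 (selected by flbas=0x4), i.e. 2^12 = 4096-byte blocks. Those two fields fix the namespace size:

    # nvme0n1 capacity from the fields in this dump
    nsze=0x140000   # 1,310,720 LBAs
    lbads=12        # in-use format lbaf4 -> 4096-byte LBAs (flbas=0x4)
    echo $(( nsze * (1 << lbads) ))             # 5368709120 bytes
    echo "$(( nsze * (1 << lbads) >> 30 )) GiB" # exactly 5 GiB

(The "size: 6GB" namespace the simple-copy test reported earlier belongs to the controller at 0000:00:10.0; this dump is nvme0 on 0000:00:11.0.)
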
00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0x4 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[mc]="0x3"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[mc]=0x3 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0x1f 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[dps]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[dps]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nmic]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nmic]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[rescap]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[fpi]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=1 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawun]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nawun]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 
08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawupf]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nacwu]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabsn]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabo]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabspf]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[noiob]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwg]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[npwg]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwa]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[npwa]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[npdg]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[npdg]=0 00:12:40.489 08:25:27 
nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[npda]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[npda]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nows]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nows]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[mssrl]="128"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=128 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[mcl]="128"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[mcl]=128 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[msrc]="127"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[msrc]=127 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.489 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsattr]="0"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 
nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[endgid]="0"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- 
nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:11.0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@51 -- # pci=0000:00:10.0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@52 -- # pci_can_use 0000:00:10.0 00:12:40.490 08:25:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:40.490 08:25:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:40.490 08:25:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:40.490 08:25:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@53 -- # ctrl_dev=nvme1 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@54 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme1 reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme1=()' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[vid]="0x1b36"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[vid]=0x1b36 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- 
nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ssvid]=0x1af4 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 12340 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[sn]="12340 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[sn]='12340 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fr]='8.0.0 ' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rab]="6"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rab]=6 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ieee]="525400"' 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ieee]=525400 00:12:40.490 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[cmic]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[cmic]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mdts]="7"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mdts]=7 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[cntlid]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[cntlid]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 
'nvme1[ver]="0x10400"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ver]=0x10400 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3r]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rtd3r]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3e]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rtd3e]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[oaes]="0x100"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[oaes]=0x100 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ctratt]=0x8000 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rrls]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rrls]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[cntrltype]="1"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[cntrltype]=1 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[crdt1]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[crdt1]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[crdt2]="0"' 00:12:40.491 08:25:27 nvme_fdp -- 
nvme/functions.sh@25 -- # nvme1[crdt2]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[crdt3]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[crdt3]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[nvmsr]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[nvmsr]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[vwci]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[vwci]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mec]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mec]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[oacs]="0x12a"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[oacs]=0x12a 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[acl]="3"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[acl]=3 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[aerl]="3"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[aerl]=3 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[frmw]="0x3"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[frmw]=0x3 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[lpa]="0x7"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[lpa]=0x7 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 
08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[elpe]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[elpe]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[npss]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[npss]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[avscc]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[avscc]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[apsta]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[apsta]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[wctemp]="343"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[wctemp]=343 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[cctemp]="373"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[cctemp]=373 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mtfa]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mtfa]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[hmpre]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[hmpre]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[hmmin]="0"' 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[hmmin]=0 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.491 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[tnvmcap]="0"' 00:12:40.492 08:25:27 nvme_fdp -- 
nvme/functions.sh@25 -- # nvme1[tnvmcap]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[unvmcap]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[unvmcap]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rpmbs]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rpmbs]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[edstt]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[edstt]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[dsto]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[dsto]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fwug]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fwug]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[kas]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[kas]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[hctma]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[hctma]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mntmt]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mntmt]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mxtmt]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mxtmt]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 
nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[sanicap]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[sanicap]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[hmminds]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[hmminds]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[hmmaxd]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[hmmaxd]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[nsetidmax]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[nsetidmax]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[endgidmax]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[endgidmax]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[anatt]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[anatt]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[anacap]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[anacap]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[anagrpmax]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[anagrpmax]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[nanagrpid]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[nanagrpid]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[pels]="0"' 00:12:40.492 08:25:27 nvme_fdp 
-- nvme/functions.sh@25 -- # nvme1[pels]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[domainid]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[domainid]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[megcap]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[megcap]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[sqes]="0x66"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[sqes]=0x66 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[cqes]="0x44"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[cqes]=0x44 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[maxcmd]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[maxcmd]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[nn]="256"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[nn]=256 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[oncs]="0x15d"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[oncs]=0x15d 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fuses]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fuses]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fna]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fna]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r 
reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[vwc]="0x7"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[vwc]=0x7 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[awun]="0"' 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[awun]=0 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.492 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[awupf]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[awupf]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[icsvscc]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[icsvscc]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[nwpc]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[nwpc]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[acwu]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[acwu]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ocfs]="0x3"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ocfs]=0x3 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[sgls]="0x1"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[sgls]=0x1 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[mnan]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[mnan]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[maxdna]="0"' 00:12:40.493 08:25:27 nvme_fdp -- 
nvme/functions.sh@25 -- # nvme1[maxdna]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[maxcna]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[maxcna]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ioccsz]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ioccsz]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[iorcsz]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[iorcsz]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[icdoff]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[icdoff]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[fcatt]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[fcatt]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[msdbd]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[msdbd]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ofcs]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[ofcs]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1[active_power_workload]="-"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1[active_power_workload]=- 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme1_ns 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@58 -- # ns_dev=nvme1n1 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@59 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme1n1 reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme1n1=()' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nsze]=0x17a17a 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[ncap]=0x17a17a 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x17a17a ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nuse]=0x17a17a 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nsfeat]=0x14 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nlbaf]=7 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[flbas]=0x7 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[mc]="0x3"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[mc]=0x3 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[dpc]=0x1f 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[dps]="0"' 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[dps]=0 00:12:40.493 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nmic]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nmic]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[rescap]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[rescap]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[fpi]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[fpi]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[dlfeat]=1 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # 
IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawun]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nawun]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawupf]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nawupf]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nacwu]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nacwu]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabsn]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nabsn]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabo]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nabo]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabspf]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nabspf]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[noiob]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[noiob]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nvmcap]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwg]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[npwg]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 
08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwa]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[npwa]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[npdg]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[npdg]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[npda]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[npda]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nows]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nows]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[mssrl]="128"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[mssrl]=128 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[mcl]="128"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[mcl]=128 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[msrc]="127"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[msrc]=127 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nulbaf]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[anagrpid]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsattr]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nsattr]=0 
00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nvmsetid]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[endgid]="0"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[endgid]=0 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[eui64]=0000000000000000 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.494 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- 
nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme1 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme1_ns 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:10.0 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme1 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@51 -- # pci=0000:00:12.0 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@52 -- # pci_can_use 0000:00:12.0 00:12:40.495 08:25:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:40.495 08:25:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:40.495 08:25:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:40.495 08:25:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@53 -- # ctrl_dev=nvme2 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@54 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme2 reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme2=()' 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.495 
00:12:40.495 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 ' nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0 nvme2[ver]=0x10400 nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1 nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:12:40.496 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0 nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3 nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0 nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0 nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0 nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0
00:12:40.497 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0 nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0 nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0 nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0 nvme2[subnqn]=nqn.2019-08.org.qemu:12342 nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]='-'
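Every field record condensed above follows one pattern from nvme/functions.sh: IFS is set to ':' and one 'reg val' pair is read from the nvme-cli output (@23), blank values are skipped (@24), and the pair is evaled into a global associative array named after the device (@25). A minimal sketch of that loop, assuming only that an nvme binary is on PATH (the run above invokes /usr/local/src/nvme-cli/nvme; the real helper is nvme_get in nvme/functions.sh, and nvme_get_sketch is a stand-in name):

    # Sketch of the parse loop implied by the trace; illustrative, not the
    # shipped implementation.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # matches the @22 record, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "sn        " -> "sn"
            [[ -n $val ]] || continue      # the @24 guard: skip blank records
            eval "${ref}[\$reg]=\${val# }" # the @25 step: nvme2[vid]=0x1b36, ...
        done < <(nvme "$@")                # e.g. nvme id-ctrl /dev/nvme2
    }

    # Usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[subnqn]}"

Because read hands everything after the first ':' to the last variable, values that themselves contain colons (subnqn=nqn.2019-08.org.qemu:12342, the ps0 power-state string) survive intact.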
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@55 -- local -n _ctrl_ns=nvme2_ns
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@56 -- for ns in "$ctrl/${ctrl##*/}n"*
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@57 -- [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@58 -- ns_dev=nvme2n1
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@59 -- nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:12:40.498 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0 nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0 nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@60 -- _ctrl_ns[1]=nvme2n1
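Worth noting in the records just above: nlbaf=7 advertises eight LBA formats (the field is zero-based), and flbas=0x4 selects lbaf4, 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte logical blocks with no metadata. A hedged helper (not part of the suite; in_use_block_size is a hypothetical name) showing how the in-use block size falls out of the array the trace populated:

    # Illustrative decode of the in-use LBA format from an array shaped
    # like nvme2n1 above.
    in_use_block_size() {
        local -n ns=$1                       # nameref, e.g. onto nvme2n1
        local idx=$(( ${ns[flbas]} & 0xf ))  # low nibble of flbas: 0x4 -> 4
        local lbaf=${ns[lbaf$idx]}           # "ms:0 lbads:12 rp:0 (in use)"
        local lbads=${lbaf#*lbads:}          # "12 rp:0 (in use)"
        lbads=${lbads%% *}                   # "12"
        echo "lbaf$idx: $(( 1 << lbads ))-byte blocks"
    }

    # in_use_block_size nvme2n1  ->  lbaf4: 4096-byte blocks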
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@56 -- for ns in "$ctrl/${ctrl##*/}n"*
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@57 -- [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@58 -- ns_dev=nvme2n2
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@59 -- nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:40.499 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:40.500 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0 nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0 nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0 nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@60 -- _ctrl_ns[2]=nvme2n2
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@58 -- # ns_dev=nvme2n3 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@59 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme2n3 reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@22 
-- # local -gA 'nvme2n3=()' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nsze]=0x100000 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[ncap]=0x100000 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nuse]=0x100000 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nsfeat]=0x14 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nlbaf]=7 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[flbas]=0x4 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[mc]="0x3"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[mc]=0x3 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[dpc]=0x1f 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- 
nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[dps]="0"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[dps]=0 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nmic]="0"' 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nmic]=0 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.501 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[rescap]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[rescap]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[fpi]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[fpi]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[dlfeat]=1 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawun]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nawun]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nawupf]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nawupf]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nacwu]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nacwu]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabsn]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nabsn]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # 
eval 'nvme2n3[nabo]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nabo]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nabspf]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nabspf]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[noiob]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[noiob]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nvmcap]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwg]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[npwg]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[npwa]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[npwa]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[npdg]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[npdg]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[npda]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[npda]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nows]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nows]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[mssrl]="128"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[mssrl]=128 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 
00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[mcl]="128"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[mcl]=128 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[msrc]="127"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[msrc]=127 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nulbaf]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[anagrpid]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nsattr]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nsattr]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nvmsetid]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[endgid]="0"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[endgid]=0 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:27 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[eui64]=0000000000000000 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 
00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:40.502 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 
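The trace up to this point is functions.sh's nvme_get helper at work: it pipes `nvme id-ns` output through a `while IFS=: read -r reg val` loop and evals each "reg : val" pair into a global associative array, which is why entries like nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' appear above. A minimal sketch of that parsing idiom, assuming a captured id-ns dump on stdin; parse_id_output and the here-doc sample are illustrative stand-ins, not the functions.sh source:

    #!/usr/bin/env bash
    # Sketch: eval "reg : val" lines from `nvme id-ns` into a global
    # associative array, mirroring the nvme_get trace above.
    parse_id_output() {
        local ref=$1 reg val
        local -gA "$ref=()"                 # same idiom as functions.sh@22
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # drop the padding around the key
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val# }\""   # assumes val carries no quotes
        done
    }
    # Example input; normally this would be `nvme id-ns /dev/nvme2n3`.
    parse_id_output nvme2n3 <<'EOF'
    nsze  : 0x100000
    flbas : 0x4
    lbaf4 : ms:0 lbads:12 rp:0 (in use)
    EOF
    echo "${nvme2n3[flbas]}"                # -> 0x4

Note that `read -r reg val` keeps any colons after the first one inside val, which is how the multi-field lbafN descriptors survive the IFS=: split intact.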
00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme2 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme2_ns 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:12.0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme2 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@51 -- # pci=0000:00:13.0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@52 -- # pci_can_use 0000:00:13.0 00:12:40.503 08:25:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:40.503 08:25:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:40.503 08:25:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:40.503 08:25:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@53 -- # ctrl_dev=nvme3 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@54 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@19 -- # local ref=nvme3 reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@20 -- # shift 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@22 -- # local -gA 'nvme3=()' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@18 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[vid]="0x1b36"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[vid]=0x1b36 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ssvid]=0x1af4 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 12343 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[sn]="12343 "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[sn]='12343 ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp 
-- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fr]='8.0.0 ' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rab]="6"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rab]=6 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ieee]="525400"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ieee]=525400 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x2 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[cmic]="0x2"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[cmic]=0x2 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mdts]="7"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mdts]=7 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[cntlid]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[cntlid]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ver]="0x10400"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ver]=0x10400 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3r]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rtd3r]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rtd3e]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rtd3e]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[oaes]="0x100"' 00:12:40.503 08:25:28 nvme_fdp -- 
nvme/functions.sh@25 -- # nvme3[oaes]=0x100 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x88010 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ctratt]=0x88010 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rrls]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rrls]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[cntrltype]="1"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[cntrltype]=1 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[crdt1]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[crdt1]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[crdt2]="0"' 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[crdt2]=0 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.503 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[crdt3]="0"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[crdt3]=0 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[nvmsr]="0"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[nvmsr]=0 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[vwci]="0"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[vwci]=0 00:12:40.765 08:25:28 nvme_fdp -- 
nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mec]="0"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mec]=0 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[oacs]="0x12a"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[oacs]=0x12a 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[acl]="3"' 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[acl]=3 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:12:40.765 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[aerl]="3"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[aerl]=3 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[frmw]="0x3"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[frmw]=0x3 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[lpa]="0x7"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[lpa]=0x7 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[elpe]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[elpe]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[npss]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[npss]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[avscc]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[avscc]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 
nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[apsta]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[apsta]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[wctemp]="343"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[wctemp]=343 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[cctemp]="373"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[cctemp]=373 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mtfa]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mtfa]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[hmpre]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[hmpre]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[hmmin]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[hmmin]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[tnvmcap]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[tnvmcap]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[unvmcap]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[unvmcap]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rpmbs]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rpmbs]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[edstt]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[edstt]=0 00:12:40.766 08:25:28 nvme_fdp -- 
nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[dsto]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[dsto]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fwug]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fwug]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[kas]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[kas]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[hctma]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[hctma]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mntmt]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mntmt]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mxtmt]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mxtmt]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[sanicap]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[sanicap]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[hmminds]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[hmminds]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[hmmaxd]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[hmmaxd]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 
nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[nsetidmax]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[nsetidmax]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[endgidmax]="1"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[endgidmax]=1 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[anatt]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[anatt]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[anacap]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[anacap]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[anagrpmax]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[anagrpmax]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[nanagrpid]="0"' 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[nanagrpid]=0 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.766 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[pels]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[pels]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[domainid]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[domainid]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[megcap]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[megcap]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[sqes]="0x66"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[sqes]=0x66 00:12:40.767 08:25:28 
nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[cqes]="0x44"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[cqes]=0x44 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[maxcmd]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[maxcmd]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[nn]="256"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[nn]=256 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[oncs]="0x15d"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[oncs]=0x15d 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fuses]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fuses]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fna]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fna]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[vwc]="0x7"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[vwc]=0x7 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[awun]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[awun]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[awupf]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[awupf]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[icsvscc]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[icsvscc]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[nwpc]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[nwpc]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[acwu]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[acwu]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ocfs]="0x3"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ocfs]=0x3 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[sgls]="0x1"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[sgls]=0x1 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[mnan]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[mnan]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[maxdna]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[maxdna]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[maxcna]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[maxcna]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ioccsz]="0"' 00:12:40.767 
08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ioccsz]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[iorcsz]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[iorcsz]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[icdoff]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[icdoff]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[fcatt]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[fcatt]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[msdbd]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[msdbd]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ofcs]="0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ofcs]=0 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@24 -- # [[ -n - ]] 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # eval 'nvme3[active_power_workload]="-"' 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@25 -- # nvme3[active_power_workload]=- 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # IFS=: 00:12:40.767 08:25:28 nvme_fdp -- nvme/functions.sh@23 -- # read -r reg val 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme3_ns 
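Here the nvme3 id-ctrl dump has been folded into the nvme3 array, and the script turns to bookkeeping: the nameref just established (local -n _ctrl_ns=nvme3_ns) gives the namespace loop a fixed handle on the per-controller table, and the ctrls_g/nvmes_g/bdfs_g assignments on the next lines record the controller, its namespace-table name, and its PCI address under the same key. A self-contained sketch of that pattern with made-up namespace values; the global array names mirror the trace, while register_ctrl is an illustrative wrapper, not functions.sh code:

    #!/usr/bin/env bash
    # Sketch of the bookkeeping step traced above (functions.sh@55-65).
    declare -gA ctrls_g=() nvmes_g=() bdfs_g=()
    declare -ga ordered_ctrls_g=()

    register_ctrl() {
        local ctrl_dev=$1 pci=$2
        declare -gA "${ctrl_dev}_ns=()"     # per-controller table, e.g. nvme3_ns
        local -n _ctrl_ns=${ctrl_dev}_ns    # nameref: generic handle on that table
        _ctrl_ns[1]=${ctrl_dev}n1           # would be filled once per namespace
        ctrls_g["$ctrl_dev"]=$ctrl_dev
        nvmes_g["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs_g["$ctrl_dev"]=$pci
        ordered_ctrls_g[${ctrl_dev/nvme/}]=$ctrl_dev
    }

    register_ctrl nvme3 0000:00:13.0
    echo "${bdfs_g[nvme3]} -> ${nvmes_g[nvme3]}"   # -> 0000:00:13.0 -> nvme3_ns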
00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme3_ns 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:13.0 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@67 -- # (( 4 > 0 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@206 -- # local _ctrls feature=fdp 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@208 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@208 -- # get_ctrls_with_feature fdp 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@194 -- # (( 4 == 0 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@196 -- # local ctrl feature=fdp 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@198 -- # type -t ctrl_has_fdp 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@198 -- # [[ function == function ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@201 -- # ctrl_has_fdp nvme1 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@178 -- # local ctrl=nvme1 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # get_ctratt nvme1 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@168 -- # local ctrl=nvme1 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@169 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@77 -- # [[ -n 0x8000 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@78 -- # echo 0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # ctratt=0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@182 -- # (( ctratt & 1 << 19 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@201 -- # ctrl_has_fdp nvme0 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@178 -- # local ctrl=nvme0 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # get_ctratt nvme0 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@168 -- # local ctrl=nvme0 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@169 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@77 -- # [[ -n 0x8000 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@78 -- # echo 0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # ctratt=0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@182 -- # (( ctratt & 1 << 19 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@201 -- # ctrl_has_fdp nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@178 -- # local ctrl=nvme3 ctratt 00:12:40.768 08:25:28 nvme_fdp -- 
nvme/functions.sh@180 -- # get_ctratt nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@168 -- # local ctrl=nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@169 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@71 -- # local ctrl=nvme3 reg=ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@73 -- # [[ -n nvme3 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@75 -- # local -n _ctrl=nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@77 -- # [[ -n 0x88010 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@78 -- # echo 0x88010 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # ctratt=0x88010 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@182 -- # (( ctratt & 1 << 19 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@201 -- # echo nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@200 -- # for ctrl in "${!ctrls_g[@]}" 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@201 -- # ctrl_has_fdp nvme2 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@178 -- # local ctrl=nvme2 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # get_ctratt nvme2 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@168 -- # local ctrl=nvme2 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@169 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@71 -- # local ctrl=nvme2 reg=ctratt 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@73 -- # [[ -n nvme2 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@75 -- # local -n _ctrl=nvme2 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@77 -- # [[ -n 0x8000 ]] 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@78 -- # echo 0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@180 -- # ctratt=0x8000 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@182 -- # (( ctratt & 1 << 19 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@209 -- # (( 1 > 0 )) 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@210 -- # echo nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/functions.sh@211 -- # return 0 00:12:40.768 08:25:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:40.768 08:25:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:40.768 08:25:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:41.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.276 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:42.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:42.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:42.276 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:42.276 08:25:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:42.276 08:25:29 nvme_fdp -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:12:42.276 08:25:29 nvme_fdp -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:42.276 08:25:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:42.276 ************************************ 00:12:42.276 START TEST nvme_flexible_data_placement 00:12:42.276 ************************************ 00:12:42.276 08:25:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:42.535 Initializing NVMe Controllers 
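The controller selection traced above hinges on a single bit: get_ctrls_with_feature probes each controller's CTRATT value and keeps only those with bit 19 (Flexible Data Placement) set, which is why ctratt=0x88010 selects nvme3 while the 0x8000 controllers are skipped. A standalone sketch of the same probe, assuming nvme-cli is available:

    # Succeed if the controller advertises FDP (CTRATT bit 19), mirroring
    # the (( ctratt & 1 << 19 )) test in the trace above.
    ctrl_has_fdp() {
        local ctratt
        ctratt=$(nvme id-ctrl "$1" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp /dev/nvme3 && echo "nvme3 supports FDP"   # 0x88010 & 0x80000 is non-zero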
00:12:42.535 Attaching to 0000:00:13.0 00:12:42.535 Controller supports FDP Attached to 0000:00:13.0 00:12:42.535 Namespace ID: 1 Endurance Group ID: 1 00:12:42.535 Initialization complete. 00:12:42.535 00:12:42.535 ================================== 00:12:42.535 == FDP tests for Namespace: #01 == 00:12:42.535 ================================== 00:12:42.535 00:12:42.535 Get Feature: FDP: 00:12:42.535 ================= 00:12:42.535 Enabled: Yes 00:12:42.535 FDP configuration Index: 0 00:12:42.535 00:12:42.535 FDP configurations log page 00:12:42.535 =========================== 00:12:42.535 Number of FDP configurations: 1 00:12:42.535 Version: 0 00:12:42.535 Size: 112 00:12:42.535 FDP Configuration Descriptor: 0 00:12:42.535 Descriptor Size: 96 00:12:42.535 Reclaim Group Identifier format: 2 00:12:42.535 FDP Volatile Write Cache: Not Present 00:12:42.535 FDP Configuration: Valid 00:12:42.535 Vendor Specific Size: 0 00:12:42.535 Number of Reclaim Groups: 2 00:12:42.535 Number of Reclaim Unit Handles: 8 00:12:42.535 Max Placement Identifiers: 128 00:12:42.535 Number of Namespaces Supported: 256 00:12:42.535 Reclaim Unit Nominal Size: 6000000 bytes 00:12:42.535 Estimated Reclaim Unit Time Limit: Not Reported 00:12:42.535 RUH Desc #000: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #001: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #002: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #003: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #004: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #005: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #006: RUH Type: Initially Isolated 00:12:42.535 RUH Desc #007: RUH Type: Initially Isolated 00:12:42.535 00:12:42.535 FDP reclaim unit handle usage log page 00:12:42.535 ====================================== 00:12:42.535 Number of Reclaim Unit Handles: 8 00:12:42.535 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:42.535 RUH Usage Desc #001: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #002: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #003: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #004: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #005: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #006: RUH Attributes: Unused 00:12:42.535 RUH Usage Desc #007: RUH Attributes: Unused 00:12:42.535 00:12:42.535 FDP statistics log page 00:12:42.535 ======================= 00:12:42.535 Host bytes with metadata written: 964820992 00:12:42.535 Media bytes with metadata written: 964976640 00:12:42.536 Media bytes erased: 0 00:12:42.536 00:12:42.536 FDP Reclaim unit handle status 00:12:42.536 ============================== 00:12:42.536 Number of RUHS descriptors: 2 00:12:42.536 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000027e0 00:12:42.536 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:42.536 00:12:42.536 FDP write on placement id: 0 success 00:12:42.536 00:12:42.536 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:42.536 00:12:42.536 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:42.536 00:12:42.536 Get Feature: FDP Events for Placement handle: #0 00:12:42.536 ======================== 00:12:42.536 Number of FDP Events: 6 00:12:42.536 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:42.536 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:42.536 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:42.536 FDP Event: #3 Type: Invalid Placement Identifier 
Enabled: Yes 00:12:42.536 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:42.536 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:42.536 00:12:42.536 FDP events log page 00:12:42.536 =================== 00:12:42.536 Number of FDP events: 1 00:12:42.536 FDP Event #0: 00:12:42.536 Event Type: RU Not Written to Capacity 00:12:42.536 Placement Identifier: Valid 00:12:42.536 NSID: Valid 00:12:42.536 Location: Valid 00:12:42.536 Placement Identifier: 0 00:12:42.536 Event Timestamp: 7 00:12:42.536 Namespace Identifier: 1 00:12:42.536 Reclaim Group Identifier: 0 00:12:42.536 Reclaim Unit Handle Identifier: 0 00:12:42.536 00:12:42.536 FDP test passed 00:12:42.796 00:12:42.796 real 0m0.292s 00:12:42.796 user 0m0.092s 00:12:42.796 sys 0m0.099s 00:12:42.796 08:25:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:42.796 08:25:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 ************************************ 00:12:42.796 END TEST nvme_flexible_data_placement 00:12:42.796 ************************************ 00:12:42.796 00:12:42.796 real 0m9.256s 00:12:42.796 user 0m1.756s 00:12:42.796 sys 0m2.533s 00:12:42.796 08:25:30 nvme_fdp -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:42.796 08:25:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 ************************************ 00:12:42.796 END TEST nvme_fdp 00:12:42.796 ************************************ 00:12:42.796 08:25:30 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:42.796 08:25:30 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:42.796 08:25:30 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:12:42.796 08:25:30 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:42.796 08:25:30 -- common/autotest_common.sh@10 -- # set +x 00:12:42.796 ************************************ 00:12:42.796 START TEST nvme_rpc 00:12:42.796 ************************************ 00:12:42.796 08:25:30 nvme_rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:43.055 * Looking for test storage... 
00:12:43.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.055 08:25:30 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:12:43.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.055 --rc genhtml_branch_coverage=1 00:12:43.055 --rc genhtml_function_coverage=1 00:12:43.055 --rc genhtml_legend=1 00:12:43.055 --rc geninfo_all_blocks=1 00:12:43.055 --rc geninfo_unexecuted_blocks=1 00:12:43.055 00:12:43.055 ' 00:12:43.055 08:25:30 nvme_rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:12:43.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.055 --rc genhtml_branch_coverage=1 00:12:43.055 --rc genhtml_function_coverage=1 00:12:43.055 --rc genhtml_legend=1 00:12:43.055 --rc geninfo_all_blocks=1 00:12:43.055 --rc geninfo_unexecuted_blocks=1 00:12:43.055 00:12:43.055 ' 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 
00:12:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.056 --rc genhtml_branch_coverage=1 00:12:43.056 --rc genhtml_function_coverage=1 00:12:43.056 --rc genhtml_legend=1 00:12:43.056 --rc geninfo_all_blocks=1 00:12:43.056 --rc geninfo_unexecuted_blocks=1 00:12:43.056 00:12:43.056 ' 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:12:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.056 --rc genhtml_branch_coverage=1 00:12:43.056 --rc genhtml_function_coverage=1 00:12:43.056 --rc genhtml_legend=1 00:12:43.056 --rc geninfo_all_blocks=1 00:12:43.056 --rc geninfo_unexecuted_blocks=1 00:12:43.056 00:12:43.056 ' 00:12:43.056 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.056 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=() 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1497 -- # local bdfs 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=($(get_nvme_bdfs)) 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1498 -- # get_nvme_bdfs 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1486 -- # bdfs=() 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1486 -- # local bdfs 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:12:43.056 08:25:30 nvme_rpc -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@1488 -- # (( 4 == 0 )) 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@1500 -- # echo 0000:00:10.0 00:12:43.315 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:43.315 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66796 00:12:43.315 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:43.315 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:43.315 08:25:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66796 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@838 -- # '[' -z 66796 ']' 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:12:43.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:12:43.315 08:25:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.315 [2024-11-20 08:25:30.735027] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:12:43.315 [2024-11-20 08:25:30.735150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66796 ] 00:12:43.574 [2024-11-20 08:25:30.919212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:43.574 [2024-11-20 08:25:31.041403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.574 [2024-11-20 08:25:31.041446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.512 08:25:31 nvme_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:12:44.512 08:25:31 nvme_rpc -- common/autotest_common.sh@871 -- # return 0 00:12:44.512 08:25:31 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:44.772 Nvme0n1 00:12:44.772 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:44.772 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:45.031 request: 00:12:45.031 { 00:12:45.031 "bdev_name": "Nvme0n1", 00:12:45.031 "filename": "non_existing_file", 00:12:45.031 "method": "bdev_nvme_apply_firmware", 00:12:45.031 "req_id": 1 00:12:45.031 } 00:12:45.031 Got JSON-RPC error response 00:12:45.031 response: 00:12:45.031 { 00:12:45.031 "code": -32603, 00:12:45.031 "message": "open file failed." 00:12:45.031 } 00:12:45.031 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:45.031 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:45.031 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:45.290 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:45.290 08:25:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66796 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@957 -- # '[' -z 66796 ']' 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@961 -- # kill -0 66796 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@962 -- # uname 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 66796 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:12:45.290 killing process with pid 66796 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 66796' 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@976 -- # kill 66796 00:12:45.290 08:25:32 nvme_rpc -- common/autotest_common.sh@981 -- # wait 66796 00:12:47.827 00:12:47.827 real 0m4.745s 00:12:47.827 user 0m8.658s 00:12:47.827 sys 0m0.816s 00:12:47.827 08:25:34 nvme_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:47.827 ************************************ 00:12:47.827 END TEST nvme_rpc 00:12:47.827 ************************************ 00:12:47.827 08:25:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.827 08:25:35 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:47.827 08:25:35 -- common/autotest_common.sh@1108 -- # '[' 2 -le 
1 ']' 00:12:47.827 08:25:35 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:47.827 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:12:47.827 ************************************ 00:12:47.827 START TEST nvme_rpc_timeouts 00:12:47.827 ************************************ 00:12:47.827 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:47.827 * Looking for test storage... 00:12:47.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:47.827 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:12:47.827 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1638 -- # lcov --version 00:12:47.827 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:12:47.827 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.827 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.828 08:25:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:12:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.828 --rc genhtml_branch_coverage=1 00:12:47.828 --rc genhtml_function_coverage=1 00:12:47.828 --rc genhtml_legend=1 00:12:47.828 --rc geninfo_all_blocks=1 00:12:47.828 --rc geninfo_unexecuted_blocks=1 00:12:47.828 00:12:47.828 ' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:12:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.828 --rc genhtml_branch_coverage=1 00:12:47.828 --rc genhtml_function_coverage=1 00:12:47.828 --rc genhtml_legend=1 00:12:47.828 --rc geninfo_all_blocks=1 00:12:47.828 --rc geninfo_unexecuted_blocks=1 00:12:47.828 00:12:47.828 ' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:12:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.828 --rc genhtml_branch_coverage=1 00:12:47.828 --rc genhtml_function_coverage=1 00:12:47.828 --rc genhtml_legend=1 00:12:47.828 --rc geninfo_all_blocks=1 00:12:47.828 --rc geninfo_unexecuted_blocks=1 00:12:47.828 00:12:47.828 ' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:12:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.828 --rc genhtml_branch_coverage=1 00:12:47.828 --rc genhtml_function_coverage=1 00:12:47.828 --rc genhtml_legend=1 00:12:47.828 --rc geninfo_all_blocks=1 00:12:47.828 --rc geninfo_unexecuted_blocks=1 00:12:47.828 00:12:47.828 ' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66873 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66873 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66917 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:47.828 08:25:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66917 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # '[' -z 66917 ']' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@843 -- # local max_retries=100 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@847 -- # xtrace_disable 00:12:47.828 08:25:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:48.086 [2024-11-20 08:25:35.452758] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:12:48.086 [2024-11-20 08:25:35.452873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66917 ] 00:12:48.086 [2024-11-20 08:25:35.636215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.345 [2024-11-20 08:25:35.756772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.345 [2024-11-20 08:25:35.756781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.281 08:25:36 nvme_rpc_timeouts -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:12:49.281 Checking default timeout settings: 00:12:49.281 08:25:36 nvme_rpc_timeouts -- common/autotest_common.sh@871 -- # return 0 00:12:49.281 08:25:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:49.281 08:25:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:49.540 Making settings changes with rpc: 00:12:49.540 08:25:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:49.540 08:25:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:49.799 Check default vs. modified settings: 00:12:49.799 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:49.799 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66873 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66873 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:50.058 Setting action_on_timeout is changed as expected. 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66873 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66873 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:50.058 Setting timeout_us is changed as expected. 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66873 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:50.058 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66873 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:50.317 Setting timeout_admin_us is changed as expected. 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66873 /tmp/settings_modified_66873 00:12:50.317 08:25:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66917 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' -z 66917 ']' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@961 -- # kill -0 66917 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # uname 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 66917 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:12:50.317 killing process with pid 66917 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@975 -- # echo 'killing process with pid 66917' 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # kill 66917 00:12:50.317 08:25:37 nvme_rpc_timeouts -- common/autotest_common.sh@981 -- # wait 66917 00:12:52.848 RPC TIMEOUT SETTING TEST PASSED. 00:12:52.848 08:25:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
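The passing run above boils down to a save/compare cycle: save_config is dumped once before and once after bdev_nvme_set_options, and each of the three attach options is extracted from both dumps and required to differ. A condensed sketch of that check, with stand-in file names in place of the /tmp/settings_*_66873 files used above:

    # Compare nvme attach options before and after bdev_nvme_set_options
    # (sketch; settings_default/settings_modified are hypothetical paths).
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done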
00:12:52.848 00:12:52.848 real 0m5.200s 00:12:52.848 user 0m9.805s 00:12:52.848 sys 0m0.842s 00:12:52.848 08:25:40 nvme_rpc_timeouts -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:52.848 08:25:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:52.848 ************************************ 00:12:52.848 END TEST nvme_rpc_timeouts 00:12:52.848 ************************************ 00:12:52.848 08:25:40 -- spdk/autotest.sh@239 -- # uname -s 00:12:52.848 08:25:40 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:52.848 08:25:40 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:52.848 08:25:40 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:12:52.848 08:25:40 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:52.848 08:25:40 -- common/autotest_common.sh@10 -- # set +x 00:12:52.848 ************************************ 00:12:52.848 START TEST sw_hotplug 00:12:52.848 ************************************ 00:12:52.849 08:25:40 sw_hotplug -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:53.107 * Looking for test storage... 00:12:53.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:53.107 08:25:40 sw_hotplug -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:12:53.107 08:25:40 sw_hotplug -- common/autotest_common.sh@1638 -- # lcov --version 00:12:53.107 08:25:40 sw_hotplug -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:12:53.107 08:25:40 sw_hotplug -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.107 08:25:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.108 08:25:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:53.108 08:25:40 sw_hotplug -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.108 08:25:40 sw_hotplug -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 08:25:40 sw_hotplug -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 08:25:40 sw_hotplug -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 08:25:40 sw_hotplug -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 08:25:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:53.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.935 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.935 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.935 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.935 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.935 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:53.935 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:53.935 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:12:53.935 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:53.935 08:25:41 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:53.935 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:54.196 08:25:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:54.196 08:25:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:54.196 08:25:41 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:54.196 08:25:41 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:54.196 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:54.196 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:54.196 08:25:41 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:54.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:54.817 Waiting for block devices as requested 00:12:55.092 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.092 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.092 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.350 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:00.617 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:00.617 08:25:47 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:00.617 08:25:47 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:00.876 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:01.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:01.134 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:01.393 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:01.651 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:01.651 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67811 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@712 -- # local cmd_es=0 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@714 -- # [[ -t 0 ]] 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@714 -- # exec 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@716 -- # local time=0 TIMEFORMAT=%2R 00:13:01.909 08:25:49 sw_hotplug -- common/autotest_common.sh@722 -- # remove_attach_helper 3 6 false 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:01.909 08:25:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:02.168 Initializing NVMe Controllers 00:13:02.168 Attaching to 0000:00:10.0 00:13:02.168 Attaching to 0000:00:11.0 00:13:02.168 Attached to 0000:00:11.0 00:13:02.168 Attached to 0000:00:10.0 00:13:02.168 Initialization complete. Starting I/O... 
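From here the helper runs its three hotplug_events cycles: let I/O accumulate, surprise-remove both allowed controllers out from under the running app, then rebind and reattach them before sleeping out the hotplug_wait (the sleep 12 traced below appears to be twice the 6-second wait, once per controller). The removal and reattach steps in the trace are plain sysfs writes; a hedged sketch of that sequence, with bdf as a placeholder and the exact write order inferred from the echoes below:

    # Surprise-remove a PCIe NVMe controller and bring it back under
    # uio_pci_generic (sketch of the sysfs writes the sw_hotplug trace suggests).
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # hot remove the device
    echo 1 > /sys/bus/pci/rescan                       # rediscover it on the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override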
00:13:02.168 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:02.168 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:02.168 00:13:03.103 QEMU NVMe Ctrl (12341 ): 1576 I/Os completed (+1576) 00:13:03.103 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1576) 00:13:03.103 00:13:04.478 QEMU NVMe Ctrl (12341 ): 3748 I/Os completed (+2172) 00:13:04.478 QEMU NVMe Ctrl (12340 ): 3748 I/Os completed (+2172) 00:13:04.478 00:13:05.414 QEMU NVMe Ctrl (12341 ): 5968 I/Os completed (+2220) 00:13:05.414 QEMU NVMe Ctrl (12340 ): 5968 I/Os completed (+2220) 00:13:05.414 00:13:06.349 QEMU NVMe Ctrl (12341 ): 8188 I/Os completed (+2220) 00:13:06.349 QEMU NVMe Ctrl (12340 ): 8188 I/Os completed (+2220) 00:13:06.349 00:13:07.286 QEMU NVMe Ctrl (12341 ): 10412 I/Os completed (+2224) 00:13:07.286 QEMU NVMe Ctrl (12340 ): 10412 I/Os completed (+2224) 00:13:07.286 00:13:07.853 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:07.853 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:07.853 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:07.853 [2024-11-20 08:25:55.402843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:07.853 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:07.854 [2024-11-20 08:25:55.404718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.404772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.404795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.404817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:07.854 [2024-11-20 08:25:55.407523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.407574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.407592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.854 [2024-11-20 08:25:55.407611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.112 [2024-11-20 08:25:55.445434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:08.112 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:08.112 [2024-11-20 08:25:55.446998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.447048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.447078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.447098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:08.112 [2024-11-20 08:25:55.449592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.449635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.449657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 [2024-11-20 08:25:55.449674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.112 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:08.112 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:08.371 Attaching to 0000:00:10.0 00:13:08.371 Attached to 0000:00:10.0 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:08.371 08:25:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:08.371 Attaching to 0000:00:11.0 00:13:08.371 Attached to 0000:00:11.0 00:13:09.307 QEMU NVMe Ctrl (12340 ): 2064 I/Os completed (+2064) 00:13:09.307 QEMU NVMe Ctrl (12341 ): 1772 I/Os completed (+1772) 00:13:09.307 00:13:10.241 QEMU NVMe Ctrl (12340 ): 4310 I/Os completed (+2246) 00:13:10.241 QEMU NVMe Ctrl (12341 ): 4020 I/Os completed (+2248) 00:13:10.241 00:13:11.264 QEMU NVMe Ctrl (12340 ): 6558 I/Os completed (+2248) 00:13:11.264 QEMU NVMe Ctrl (12341 ): 6268 I/Os completed (+2248) 00:13:11.264 00:13:12.212 QEMU NVMe Ctrl (12340 ): 8794 I/Os completed (+2236) 00:13:12.212 QEMU NVMe Ctrl (12341 ): 8506 I/Os completed (+2238) 00:13:12.212 00:13:13.148 QEMU NVMe Ctrl (12340 ): 11034 I/Os completed (+2240) 00:13:13.148 QEMU NVMe Ctrl (12341 ): 10746 I/Os completed (+2240) 00:13:13.148 00:13:14.084 QEMU NVMe Ctrl (12340 ): 13262 I/Os completed (+2228) 00:13:14.084 QEMU NVMe Ctrl (12341 ): 12974 I/Os completed (+2228) 00:13:14.084 00:13:15.463 QEMU NVMe Ctrl (12340 ): 15482 I/Os completed (+2220) 00:13:15.463 QEMU NVMe Ctrl (12341 ): 15194 I/Os completed (+2220) 00:13:15.463 00:13:16.399 QEMU NVMe Ctrl (12340 ): 17686 I/Os completed (+2204) 00:13:16.399 QEMU NVMe Ctrl (12341 ): 17398 I/Os completed (+2204) 00:13:16.399 
00:13:17.335 QEMU NVMe Ctrl (12340 ): 19914 I/Os completed (+2228) 00:13:17.335 QEMU NVMe Ctrl (12341 ): 19626 I/Os completed (+2228) 00:13:17.335 00:13:18.271 QEMU NVMe Ctrl (12340 ): 22146 I/Os completed (+2232) 00:13:18.271 QEMU NVMe Ctrl (12341 ): 21858 I/Os completed (+2232) 00:13:18.271 00:13:19.206 QEMU NVMe Ctrl (12340 ): 24410 I/Os completed (+2264) 00:13:19.206 QEMU NVMe Ctrl (12341 ): 24123 I/Os completed (+2265) 00:13:19.206 00:13:20.142 QEMU NVMe Ctrl (12340 ): 26638 I/Os completed (+2228) 00:13:20.142 QEMU NVMe Ctrl (12341 ): 26351 I/Os completed (+2228) 00:13:20.142 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:20.401 [2024-11-20 08:26:07.840916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:20.401 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:20.401 [2024-11-20 08:26:07.842749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.842903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.842960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.843078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:20.401 [2024-11-20 08:26:07.845995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.846050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.846068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.846087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 EAL: Cannot open sysfs resource 00:13:20.401 EAL: pci_scan_one(): cannot parse resource 00:13:20.401 EAL: Scan for (pci) bus failed. 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:20.401 [2024-11-20 08:26:07.880428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:20.401 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:20.401 [2024-11-20 08:26:07.882100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.882144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.882171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.882193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:20.401 [2024-11-20 08:26:07.884767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.884807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.884831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 [2024-11-20 08:26:07.884854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:20.401 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:20.660 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.660 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.660 08:26:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:20.660 Attaching to 0000:00:10.0 00:13:20.660 Attached to 0000:00:10.0 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.660 08:26:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:20.660 Attaching to 0000:00:11.0 00:13:20.660 Attached to 0000:00:11.0 00:13:21.228 QEMU NVMe Ctrl (12340 ): 1104 I/Os completed (+1104) 00:13:21.228 QEMU NVMe Ctrl (12341 ): 860 I/Os completed (+860) 00:13:21.228 00:13:22.162 QEMU NVMe Ctrl (12340 ): 3332 I/Os completed (+2228) 00:13:22.162 QEMU NVMe Ctrl (12341 ): 3088 I/Os completed (+2228) 00:13:22.162 00:13:23.097 QEMU NVMe Ctrl (12340 ): 5588 I/Os completed (+2256) 00:13:23.097 QEMU NVMe Ctrl (12341 ): 5347 I/Os completed (+2259) 00:13:23.097 00:13:24.472 QEMU NVMe Ctrl (12340 ): 7832 I/Os completed (+2244) 00:13:24.472 QEMU NVMe Ctrl (12341 ): 7591 I/Os completed (+2244) 00:13:24.472 00:13:25.039 QEMU NVMe Ctrl (12340 ): 10096 I/Os completed (+2264) 00:13:25.039 QEMU NVMe Ctrl (12341 ): 9858 I/Os completed (+2267) 00:13:25.039 00:13:26.415 QEMU NVMe Ctrl (12340 ): 12316 I/Os completed (+2220) 00:13:26.415 QEMU NVMe Ctrl (12341 ): 12078 I/Os completed (+2220) 00:13:26.415 00:13:27.354 QEMU NVMe Ctrl (12340 ): 14540 I/Os completed (+2224) 00:13:27.354 QEMU NVMe Ctrl (12341 ): 14305 I/Os completed (+2227) 00:13:27.354 00:13:28.291 QEMU NVMe Ctrl (12340 ): 16740 I/Os completed (+2200) 00:13:28.291 QEMU NVMe Ctrl (12341 ): 16507 I/Os completed (+2202) 00:13:28.291 00:13:29.228 
QEMU NVMe Ctrl (12340 ): 18948 I/Os completed (+2208) 00:13:29.228 QEMU NVMe Ctrl (12341 ): 18713 I/Os completed (+2206) 00:13:29.228 00:13:30.165 QEMU NVMe Ctrl (12340 ): 21152 I/Os completed (+2204) 00:13:30.165 QEMU NVMe Ctrl (12341 ): 20917 I/Os completed (+2204) 00:13:30.165 00:13:31.101 QEMU NVMe Ctrl (12340 ): 23368 I/Os completed (+2216) 00:13:31.101 QEMU NVMe Ctrl (12341 ): 23133 I/Os completed (+2216) 00:13:31.101 00:13:32.037 QEMU NVMe Ctrl (12340 ): 25580 I/Os completed (+2212) 00:13:32.037 QEMU NVMe Ctrl (12341 ): 25345 I/Os completed (+2212) 00:13:32.037 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.973 [2024-11-20 08:26:20.217478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:32.973 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:32.973 [2024-11-20 08:26:20.219342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.219506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.219561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.219662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:32.973 [2024-11-20 08:26:20.222622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.222769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.222819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.222928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.973 [2024-11-20 08:26:20.256062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:32.973 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:32.973 [2024-11-20 08:26:20.257679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.257913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.257971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.258075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:32.973 [2024-11-20 08:26:20.260693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.260817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.260872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 [2024-11-20 08:26:20.260957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:32.973 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:32.973 EAL: Scan for (pci) bus failed. 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:32.973 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:32.973 Attaching to 0000:00:10.0 00:13:32.973 Attached to 0000:00:10.0 00:13:33.235 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:33.235 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.235 08:26:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:33.235 Attaching to 0000:00:11.0 00:13:33.235 Attached to 0000:00:11.0 00:13:33.235 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:33.235 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:33.235 [2024-11-20 08:26:20.575387] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:45.439 08:26:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:45.439 08:26:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:45.439 08:26:32 sw_hotplug -- common/autotest_common.sh@722 -- # time=43.17 00:13:45.439 08:26:32 sw_hotplug -- common/autotest_common.sh@723 -- # echo 43.17 00:13:45.439 08:26:32 sw_hotplug -- common/autotest_common.sh@725 -- # return 0 00:13:45.439 08:26:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.17 00:13:45.439 08:26:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2 00:13:45.439 remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s)) 08:26:32 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67811 00:13:52.066 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67811) - No such process 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67811 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68358 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68358 00:13:52.066 08:26:38 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@838 -- # '[' -z 68358 ']' 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:52.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:52.067 08:26:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.067 [2024-11-20 08:26:38.688217] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
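Between passes the harness verifies the old application is gone and brings up a fresh spdk_tgt: kill -0 67811 probing a dead PID yields the expected "No such process", after which tgt_run_hotplug launches the target and waitforlisten blocks until the RPC socket answers. A sketch of that liveness/readiness pattern, assuming waitforlisten amounts to polling for the UNIX socket (its real body lives in autotest_common.sh and is not shown in this log):

    # launch the target and arm the same cleanup trap as sh@112
    # (the suite's killprocess helper is replaced by a plain kill here)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

    # block until the app is both alive and listening on /var/tmp/spdk.sock
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$spdk_tgt_pid" || { echo 'spdk_tgt exited before listening'; exit 1; }
        sleep 0.1
    done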
00:13:52.067 [2024-11-20 08:26:38.688833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68358 ] 00:13:52.067 [2024-11-20 08:26:38.870922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.067 [2024-11-20 08:26:38.984700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@871 -- # return 0 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@712 -- # local cmd_es=0 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@714 -- # [[ -t 0 ]] 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@714 -- # exec 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@716 -- # local time=0 TIMEFORMAT=%2R 00:13:52.325 08:26:39 sw_hotplug -- common/autotest_common.sh@722 -- # remove_attach_helper 3 6 true 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:52.325 08:26:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.874 08:26:45 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:58.874 08:26:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.874 [2024-11-20 08:26:45.906225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
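The sh@12-13 trace shows how the test asks the running target which controllers it still sees: bdev_get_bdevs over the RPC socket, jq extracting each NVMe bdev's PCI address, and sort -u collapsing per-namespace entries into unique BDFs. Reconstructed from the trace (rpc_cmd is assumed here to be the suite's wrapper around scripts/rpc.py):

    # as traced at sw_hotplug.sh@12-13
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # sh@50: capture the surviving BDFs after a detach request
    bdfs=($(bdev_bdfs))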
00:13:58.874 [2024-11-20 08:26:45.909316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:45.909364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:45.909391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:45.909435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:45.909450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:45.909465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:45.909478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:45.909492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:45.909504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:45.909522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:45.909533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:45.909547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 08:26:45 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:58.874 08:26:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:58.874 [2024-11-20 08:26:46.305505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:58.874 [2024-11-20 08:26:46.307750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:46.307793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:46.307811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:46.307831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:46.307845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:46.307858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:46.307873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:46.307884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:46.307898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.874 [2024-11-20 08:26:46.307911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.874 [2024-11-20 08:26:46.307924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.874 [2024-11-20 08:26:46.307936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:59.133 08:26:46 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:59.133 08:26:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:59.133 08:26:46 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.133 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.390 08:26:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.612 08:26:58 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:11.612 08:26:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.612 08:26:58 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.612 08:26:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.612 08:26:58 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:11.612 08:26:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.612 [2024-11-20 08:26:58.985097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:11.612 [2024-11-20 08:26:58.987396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.612 [2024-11-20 08:26:58.987443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.612 [2024-11-20 08:26:58.987461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.612 [2024-11-20 08:26:58.987485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.613 [2024-11-20 08:26:58.987498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.613 [2024-11-20 08:26:58.987512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.613 [2024-11-20 08:26:58.987525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.613 [2024-11-20 08:26:58.987538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.613 [2024-11-20 08:26:58.987550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.613 [2024-11-20 08:26:58.987565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.613 [2024-11-20 08:26:58.987576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.613 [2024-11-20 08:26:58.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.613 08:26:59 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:11.613 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:11.613 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:11.871 [2024-11-20 08:26:59.384431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
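The (( 1 > 0 )), sleep 0.5, and "Still waiting for 0000:00:11.0 to be gone" records above are one turn of a poll loop: after requesting removal, the helper re-reads bdev_bdfs every half second until the detached controllers stop appearing in the target's bdev list. A reconstruction of the loop being stepped through at sh@50-51:

    # spin until the removed controllers vanish from bdev_get_bdevs output
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done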
00:14:11.871 [2024-11-20 08:26:59.386673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.871 [2024-11-20 08:26:59.386715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.871 [2024-11-20 08:26:59.386737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.871 [2024-11-20 08:26:59.386756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.871 [2024-11-20 08:26:59.386770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.871 [2024-11-20 08:26:59.386782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.871 [2024-11-20 08:26:59.386797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.871 [2024-11-20 08:26:59.386808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.871 [2024-11-20 08:26:59.386823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.871 [2024-11-20 08:26:59.386835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.871 [2024-11-20 08:26:59.386848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.871 [2024-11-20 08:26:59.386860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:12.129 08:26:59 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:12.129 08:26:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:12.129 08:26:59 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.129 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:12.388 08:26:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:24.623 08:27:11 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:24.623 08:27:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:24.623 08:27:11 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:24.623 [2024-11-20 08:27:11.964201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:24.623 [2024-11-20 08:27:11.966900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.623 [2024-11-20 08:27:11.967069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.623 [2024-11-20 08:27:11.967185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.623 [2024-11-20 08:27:11.967303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.623 [2024-11-20 08:27:11.967339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.623 [2024-11-20 08:27:11.967471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.623 [2024-11-20 08:27:11.967579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.623 [2024-11-20 08:27:11.967619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.623 [2024-11-20 08:27:11.967712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.623 [2024-11-20 08:27:11.967770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.623 [2024-11-20 08:27:11.967835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.623 [2024-11-20 08:27:11.967925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.623 08:27:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:24.623 08:27:11 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:24.623 08:27:12 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:24.623 08:27:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:24.623 08:27:12 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:24.623 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:24.881 [2024-11-20 08:27:12.363588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:24.881 [2024-11-20 08:27:12.366300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.882 [2024-11-20 08:27:12.366479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.882 [2024-11-20 08:27:12.366512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.882 [2024-11-20 08:27:12.366537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.882 [2024-11-20 08:27:12.366552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.882 [2024-11-20 08:27:12.366564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.882 [2024-11-20 08:27:12.366580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.882 [2024-11-20 08:27:12.366591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.882 [2024-11-20 08:27:12.366609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.882 [2024-11-20 08:27:12.366622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:24.882 [2024-11-20 08:27:12.366636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.882 [2024-11-20 08:27:12.366648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:25.140 08:27:12 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:25.140 08:27:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.140 08:27:12 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:25.140 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.409 08:27:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@722 -- # time=45.16 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@723 -- # echo 45.16 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@725 -- # return 0 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:14:37.701 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:14:37.701 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 08:27:24 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:37.701 08:27:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@712 -- # local cmd_es=0 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@714 -- # [[ -t 0 ]] 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@714 -- # exec 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@716 -- # local time=0 TIMEFORMAT=%2R 00:14:37.701 08:27:25 sw_hotplug -- common/autotest_common.sh@722 -- # remove_attach_helper 3 6 true 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:37.701 08:27:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.259 [2024-11-20 08:27:31.100669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
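This pass exercises hotplug from inside the target: rpc_cmd bdev_nvme_set_hotplug -d/-e toggles SPDK's hotplug monitor over RPC, and debug_remove_attach_helper runs remove_attach_helper under the timing_cmd seen in the trace (TIMEFORMAT=%2R, which produced the earlier "took 45.16s" line). A minimal sketch of that timing capture using only bash built-ins; the suite's timing_cmd in autotest_common.sh does extra file-descriptor juggling that is omitted here:

    # capture just the elapsed seconds of the helper, as timing_cmd does;
    # the helper's own output is discarded in this sketch for brevity
    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 true >/dev/null 2>&1; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2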
00:14:44.259 [2024-11-20 08:27:31.102289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.259 [2024-11-20 08:27:31.102339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.259 [2024-11-20 08:27:31.102357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.259 [2024-11-20 08:27:31.102382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.259 [2024-11-20 08:27:31.102397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.259 [2024-11-20 08:27:31.102412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.259 [2024-11-20 08:27:31.102433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.259 [2024-11-20 08:27:31.102450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.259 [2024-11-20 08:27:31.102470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.259 [2024-11-20 08:27:31.102486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.259 [2024-11-20 08:27:31.102506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.259 [2024-11-20 08:27:31.102524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.259 08:27:31 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:44.259 08:27:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:44.259 [2024-11-20 08:27:31.699710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:44.259 [2024-11-20 08:27:31.701490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.260 [2024-11-20 08:27:31.701636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.260 [2024-11-20 08:27:31.701664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.260 [2024-11-20 08:27:31.701686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.260 [2024-11-20 08:27:31.701701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.260 [2024-11-20 08:27:31.701713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.260 [2024-11-20 08:27:31.701729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.260 [2024-11-20 08:27:31.701740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.260 [2024-11-20 08:27:31.701754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.260 [2024-11-20 08:27:31.701768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.260 [2024-11-20 08:27:31.701781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.260 [2024-11-20 08:27:31.701793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.826 08:27:32 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:44.826 08:27:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.826 08:27:32 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.826 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:45.084 08:27:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:57.303 08:27:44 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:57.303 08:27:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.303 08:27:44 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:57.303 [2024-11-20 08:27:44.678842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:57.303 [2024-11-20 08:27:44.681425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.303 [2024-11-20 08:27:44.681569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.303 [2024-11-20 08:27:44.681735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.303 [2024-11-20 08:27:44.681805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.303 [2024-11-20 08:27:44.681883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.303 [2024-11-20 08:27:44.681942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.303 [2024-11-20 08:27:44.682055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.303 [2024-11-20 08:27:44.682097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.303 [2024-11-20 08:27:44.682148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.303 [2024-11-20 08:27:44.682244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.303 [2024-11-20 08:27:44.682278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.303 [2024-11-20 08:27:44.682343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:57.303 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:57.304 08:27:44 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:57.304 08:27:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.304 08:27:44 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:57.304 08:27:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:57.563 [2024-11-20 08:27:45.078185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:57.563 [2024-11-20 08:27:45.079904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.563 [2024-11-20 08:27:45.080048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.563 [2024-11-20 08:27:45.080215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.563 [2024-11-20 08:27:45.080277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.563 [2024-11-20 08:27:45.080368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.563 [2024-11-20 08:27:45.080424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.563 [2024-11-20 08:27:45.080516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.563 [2024-11-20 08:27:45.080553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.563 [2024-11-20 08:27:45.080605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.563 [2024-11-20 08:27:45.080700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.563 [2024-11-20 08:27:45.080888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.563 [2024-11-20 08:27:45.080941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:57.822 08:27:45 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:57.822 08:27:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.822 08:27:45 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:57.822 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.081 08:27:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.290 [2024-11-20 08:27:57.757781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.290 [2024-11-20 08:27:57.760036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.290 [2024-11-20 08:27:57.760075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.290 [2024-11-20 08:27:57.760092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.290 [2024-11-20 08:27:57.760118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.290 [2024-11-20 08:27:57.760130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.290 [2024-11-20 08:27:57.760145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.290 [2024-11-20 08:27:57.760158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.290 [2024-11-20 08:27:57.760176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.290 [2024-11-20 08:27:57.760189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.290 [2024-11-20 08:27:57.760204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.290 [2024-11-20 08:27:57.760215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.290 [2024-11-20 08:27:57.760230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.290 08:27:57 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:10.290 08:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:10.858 [2024-11-20 08:27:58.157137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:10.858 [2024-11-20 08:27:58.158708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.858 [2024-11-20 08:27:58.158750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.858 [2024-11-20 08:27:58.158770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.858 [2024-11-20 08:27:58.158791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.858 [2024-11-20 08:27:58.158805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.858 [2024-11-20 08:27:58.158817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.858 [2024-11-20 08:27:58.158833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.858 [2024-11-20 08:27:58.158843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.858 [2024-11-20 08:27:58.158857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.858 [2024-11-20 08:27:58.158871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.858 [2024-11-20 08:27:58.158888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.858 [2024-11-20 08:27:58.158900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.858 08:27:58 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:15:10.858 08:27:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.858 08:27:58 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:10.858 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:11.117 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:11.376 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:11.376 08:27:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@566 -- # xtrace_disable 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@722 -- # time=45.72 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@723 -- # echo 45.72 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@725 -- # return 0 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:15:23.584 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:23.584 08:28:10 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68358 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@957 -- # '[' -z 68358 ']' 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@961 -- # kill -0 68358 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@962 -- # uname 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 68358 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@975 -- # echo 'killing process with pid 68358' 00:15:23.584 killing process with pid 68358 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@976 -- # kill 68358 00:15:23.584 08:28:10 sw_hotplug -- common/autotest_common.sh@981 -- # wait 68358 00:15:26.121 08:28:13 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:26.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:26.949 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:26.949 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:26.949 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.949 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.949 00:15:26.949 real 2m34.149s 00:15:26.949 user 1m51.748s 00:15:26.949 sys 0m22.689s 00:15:26.949 08:28:14 sw_hotplug -- 
common/autotest_common.sh@1133 -- # xtrace_disable 00:15:26.949 ************************************ 00:15:26.949 END TEST sw_hotplug 00:15:26.949 ************************************ 00:15:26.949 08:28:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:27.209 08:28:14 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:27.209 08:28:14 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:27.209 08:28:14 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:15:27.209 08:28:14 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:27.209 08:28:14 -- common/autotest_common.sh@10 -- # set +x 00:15:27.209 ************************************ 00:15:27.209 START TEST nvme_xnvme 00:15:27.209 ************************************ 00:15:27.209 08:28:14 nvme_xnvme -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:27.209 * Looking for test storage... 00:15:27.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:27.209 08:28:14 nvme_xnvme -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:15:27.209 08:28:14 nvme_xnvme -- common/autotest_common.sh@1638 -- # lcov --version 00:15:27.209 08:28:14 nvme_xnvme -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:15:27.468 08:28:14 nvme_xnvme -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.468 08:28:14 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:27.468 08:28:14 nvme_xnvme -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.468 08:28:14 nvme_xnvme -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:15:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.468 --rc genhtml_branch_coverage=1 00:15:27.468 --rc genhtml_function_coverage=1 00:15:27.468 --rc genhtml_legend=1 00:15:27.468 --rc geninfo_all_blocks=1 00:15:27.468 --rc geninfo_unexecuted_blocks=1 00:15:27.468 00:15:27.468 ' 00:15:27.468 08:28:14 nvme_xnvme -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:15:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.468 --rc genhtml_branch_coverage=1 00:15:27.468 --rc genhtml_function_coverage=1 00:15:27.468 --rc genhtml_legend=1 00:15:27.468 --rc geninfo_all_blocks=1 00:15:27.468 --rc geninfo_unexecuted_blocks=1 00:15:27.468 00:15:27.468 ' 00:15:27.468 08:28:14 nvme_xnvme -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:15:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.468 --rc genhtml_branch_coverage=1 00:15:27.469 --rc genhtml_function_coverage=1 00:15:27.469 --rc genhtml_legend=1 00:15:27.469 --rc geninfo_all_blocks=1 00:15:27.469 --rc geninfo_unexecuted_blocks=1 00:15:27.469 00:15:27.469 ' 00:15:27.469 08:28:14 nvme_xnvme -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:15:27.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.469 --rc genhtml_branch_coverage=1 00:15:27.469 --rc genhtml_function_coverage=1 00:15:27.469 --rc genhtml_legend=1 00:15:27.469 --rc geninfo_all_blocks=1 00:15:27.469 --rc geninfo_unexecuted_blocks=1 00:15:27.469 00:15:27.469 ' 00:15:27.469 08:28:14 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.469 08:28:14 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.469 08:28:14 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.469 08:28:14 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.469 08:28:14 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.469 08:28:14 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.469 08:28:14 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.469 08:28:14 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.469 08:28:14 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:27.469 08:28:14 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.469 08:28:14 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:27.469 08:28:14 nvme_xnvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:15:27.469 08:28:14 nvme_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:27.469 08:28:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.469 ************************************ 00:15:27.469 START TEST xnvme_to_malloc_dd_copy 00:15:27.469 ************************************ 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1132 -- # malloc_to_xnvme_copy 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:15:27.469 08:28:14 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:27.469 08:28:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:27.469 { 00:15:27.469 "subsystems": [ 00:15:27.469 { 00:15:27.469 "subsystem": "bdev", 00:15:27.469 "config": [ 00:15:27.469 { 00:15:27.469 "params": { 00:15:27.469 "block_size": 512, 00:15:27.469 "num_blocks": 2097152, 00:15:27.469 "name": "malloc0" 00:15:27.469 }, 00:15:27.469 "method": "bdev_malloc_create" 00:15:27.469 }, 00:15:27.469 { 00:15:27.469 "params": { 00:15:27.469 "io_mechanism": "libaio", 00:15:27.469 "filename": "/dev/nullb0", 00:15:27.469 "name": "null0" 00:15:27.469 }, 00:15:27.469 "method": "bdev_xnvme_create" 00:15:27.469 }, 00:15:27.469 { 00:15:27.469 "method": "bdev_wait_for_examine" 00:15:27.469 } 00:15:27.469 ] 00:15:27.469 } 00:15:27.469 ] 00:15:27.469 } 00:15:27.469 [2024-11-20 08:28:14.980127] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:15:27.469 [2024-11-20 08:28:14.980265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69745 ] 00:15:27.727 [2024-11-20 08:28:15.160203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.727 [2024-11-20 08:28:15.265827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.263  [2024-11-20T08:28:18.761Z] Copying: 269/1024 [MB] (269 MBps) [2024-11-20T08:28:19.699Z] Copying: 531/1024 [MB] (262 MBps) [2024-11-20T08:28:20.636Z] Copying: 799/1024 [MB] (267 MBps) [2024-11-20T08:28:24.833Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:15:37.272 00:15:37.272 08:28:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:37.272 08:28:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:37.272 08:28:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:37.272 08:28:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:37.272 { 00:15:37.272 "subsystems": [ 00:15:37.272 { 00:15:37.272 "subsystem": "bdev", 00:15:37.272 "config": [ 00:15:37.272 { 00:15:37.272 "params": { 00:15:37.272 "block_size": 512, 00:15:37.272 "num_blocks": 2097152, 00:15:37.272 "name": "malloc0" 00:15:37.272 }, 00:15:37.272 "method": "bdev_malloc_create" 00:15:37.272 }, 00:15:37.272 { 00:15:37.272 "params": { 00:15:37.272 "io_mechanism": "libaio", 00:15:37.272 "filename": "/dev/nullb0", 00:15:37.272 "name": "null0" 00:15:37.272 }, 00:15:37.272 "method": "bdev_xnvme_create" 00:15:37.272 }, 00:15:37.272 { 00:15:37.272 "method": "bdev_wait_for_examine" 00:15:37.272 } 00:15:37.272 ] 00:15:37.272 } 00:15:37.272 ] 00:15:37.272 } 00:15:37.272 [2024-11-20 08:28:24.454318] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:15:37.272 [2024-11-20 08:28:24.454440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69860 ] 00:15:37.272 [2024-11-20 08:28:24.636487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.272 [2024-11-20 08:28:24.752816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.837  [2024-11-20T08:28:28.334Z] Copying: 262/1024 [MB] (262 MBps) [2024-11-20T08:28:29.272Z] Copying: 528/1024 [MB] (265 MBps) [2024-11-20T08:28:30.207Z] Copying: 795/1024 [MB] (267 MBps) [2024-11-20T08:28:34.401Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:15:46.840 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:46.840 08:28:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:46.840 { 00:15:46.840 "subsystems": [ 00:15:46.840 { 00:15:46.840 "subsystem": "bdev", 00:15:46.840 "config": [ 00:15:46.840 { 00:15:46.840 "params": { 00:15:46.840 "block_size": 512, 00:15:46.840 "num_blocks": 2097152, 00:15:46.840 "name": "malloc0" 00:15:46.840 }, 00:15:46.840 "method": "bdev_malloc_create" 00:15:46.840 }, 00:15:46.840 { 00:15:46.840 "params": { 00:15:46.840 "io_mechanism": "io_uring", 00:15:46.840 "filename": "/dev/nullb0", 00:15:46.840 "name": "null0" 00:15:46.840 }, 00:15:46.840 "method": "bdev_xnvme_create" 00:15:46.840 }, 00:15:46.840 { 00:15:46.840 "method": "bdev_wait_for_examine" 00:15:46.840 } 00:15:46.840 ] 00:15:46.840 } 00:15:46.840 ] 00:15:46.840 } 00:15:46.840 [2024-11-20 08:28:33.958222] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:15:46.840 [2024-11-20 08:28:33.958364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69965 ] 00:15:46.840 [2024-11-20 08:28:34.137907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.840 [2024-11-20 08:28:34.255585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.380  [2024-11-20T08:28:37.878Z] Copying: 271/1024 [MB] (271 MBps) [2024-11-20T08:28:38.814Z] Copying: 546/1024 [MB] (274 MBps) [2024-11-20T08:28:39.755Z] Copying: 821/1024 [MB] (275 MBps) [2024-11-20T08:28:43.949Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:15:56.388 00:15:56.388 08:28:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:56.388 08:28:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:56.388 08:28:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:56.388 08:28:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:56.388 { 00:15:56.388 "subsystems": [ 00:15:56.388 { 00:15:56.388 "subsystem": "bdev", 00:15:56.388 "config": [ 00:15:56.388 { 00:15:56.388 "params": { 00:15:56.388 "block_size": 512, 00:15:56.388 "num_blocks": 2097152, 00:15:56.388 "name": "malloc0" 00:15:56.388 }, 00:15:56.388 "method": "bdev_malloc_create" 00:15:56.388 }, 00:15:56.388 { 00:15:56.388 "params": { 00:15:56.388 "io_mechanism": "io_uring", 00:15:56.388 "filename": "/dev/nullb0", 00:15:56.388 "name": "null0" 00:15:56.388 }, 00:15:56.388 "method": "bdev_xnvme_create" 00:15:56.388 }, 00:15:56.388 { 00:15:56.388 "method": "bdev_wait_for_examine" 00:15:56.388 } 00:15:56.388 ] 00:15:56.388 } 00:15:56.388 ] 00:15:56.388 } 00:15:56.388 [2024-11-20 08:28:43.355370] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:15:56.388 [2024-11-20 08:28:43.355506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70078 ] 00:15:56.388 [2024-11-20 08:28:43.537468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.388 [2024-11-20 08:28:43.647430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.924  [2024-11-20T08:28:47.052Z] Copying: 277/1024 [MB] (277 MBps) [2024-11-20T08:28:48.454Z] Copying: 554/1024 [MB] (277 MBps) [2024-11-20T08:28:48.713Z] Copying: 832/1024 [MB] (278 MBps) [2024-11-20T08:28:52.906Z] Copying: 1024/1024 [MB] (average 278 MBps) 00:16:05.345 00:16:05.345 08:28:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:05.345 08:28:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:05.345 00:16:05.345 real 0m37.769s 00:16:05.345 user 0m32.996s 00:16:05.345 sys 0m4.286s 00:16:05.345 08:28:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:05.345 08:28:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:05.345 ************************************ 00:16:05.345 END TEST xnvme_to_malloc_dd_copy 00:16:05.345 ************************************ 00:16:05.345 08:28:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:05.345 08:28:52 nvme_xnvme -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:16:05.345 08:28:52 nvme_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:05.345 08:28:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.345 ************************************ 00:16:05.345 START TEST xnvme_bdevperf 00:16:05.345 ************************************ 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1132 -- # xnvme_bdevperf 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:05.345 
08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:05.345 08:28:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:05.345 { 00:16:05.345 "subsystems": [ 00:16:05.345 { 00:16:05.345 "subsystem": "bdev", 00:16:05.345 "config": [ 00:16:05.345 { 00:16:05.345 "params": { 00:16:05.345 "io_mechanism": "libaio", 00:16:05.345 "filename": "/dev/nullb0", 00:16:05.345 "name": "null0" 00:16:05.345 }, 00:16:05.345 "method": "bdev_xnvme_create" 00:16:05.345 }, 00:16:05.345 { 00:16:05.345 "method": "bdev_wait_for_examine" 00:16:05.345 } 00:16:05.345 ] 00:16:05.345 } 00:16:05.345 ] 00:16:05.345 } 00:16:05.345 [2024-11-20 08:28:52.825931] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:05.345 [2024-11-20 08:28:52.826059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:16:05.604 [2024-11-20 08:28:53.007301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.604 [2024-11-20 08:28:53.119877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.172 Running I/O for 5 seconds... 00:16:08.048 158336.00 IOPS, 618.50 MiB/s [2024-11-20T08:28:56.545Z] 156800.00 IOPS, 612.50 MiB/s [2024-11-20T08:28:57.485Z] 156330.67 IOPS, 610.67 MiB/s [2024-11-20T08:28:58.864Z] 156080.00 IOPS, 609.69 MiB/s [2024-11-20T08:28:58.864Z] 155942.40 IOPS, 609.15 MiB/s 00:16:11.303 Latency(us) 00:16:11.303 [2024-11-20T08:28:58.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.303 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:11.303 null0 : 5.00 155891.91 608.95 0.00 0.00 408.15 366.83 1789.74 00:16:11.303 [2024-11-20T08:28:58.864Z] =================================================================================================================== 00:16:11.303 [2024-11-20T08:28:58.864Z] Total : 155891.91 608.95 0.00 0.00 408.15 366.83 1789.74 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:12.237 08:28:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:12.237 { 00:16:12.237 "subsystems": [ 00:16:12.237 { 00:16:12.237 "subsystem": "bdev", 00:16:12.237 "config": [ 00:16:12.237 { 00:16:12.237 "params": { 00:16:12.237 "io_mechanism": "io_uring", 00:16:12.237 "filename": "/dev/nullb0", 00:16:12.237 "name": "null0" 00:16:12.237 }, 00:16:12.237 "method": "bdev_xnvme_create" 00:16:12.237 }, 
00:16:12.237 { 00:16:12.237 "method": "bdev_wait_for_examine" 00:16:12.237 } 00:16:12.237 ] 00:16:12.237 } 00:16:12.237 ] 00:16:12.237 } 00:16:12.237 [2024-11-20 08:28:59.666900] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:12.237 [2024-11-20 08:28:59.667040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70286 ] 00:16:12.496 [2024-11-20 08:28:59.844735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.496 [2024-11-20 08:28:59.951673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.755 Running I/O for 5 seconds... 00:16:15.068 201472.00 IOPS, 787.00 MiB/s [2024-11-20T08:29:03.563Z] 201280.00 IOPS, 786.25 MiB/s [2024-11-20T08:29:04.496Z] 201258.67 IOPS, 786.17 MiB/s [2024-11-20T08:29:05.496Z] 201216.00 IOPS, 786.00 MiB/s 00:16:17.935 Latency(us) 00:16:17.935 [2024-11-20T08:29:05.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.935 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:17.935 null0 : 5.00 201236.85 786.08 0.00 0.00 315.79 187.53 1697.62 00:16:17.935 [2024-11-20T08:29:05.496Z] =================================================================================================================== 00:16:17.935 [2024-11-20T08:29:05.496Z] Total : 201236.85 786.08 0.00 0.00 315.79 187.53 1697.62 00:16:18.873 08:29:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:18.873 08:29:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:18.873 00:16:18.873 real 0m13.708s 00:16:18.873 user 0m10.276s 00:16:18.873 sys 0m3.246s 00:16:18.873 08:29:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:18.873 08:29:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:18.873 ************************************ 00:16:18.873 END TEST xnvme_bdevperf 00:16:18.873 ************************************ 00:16:19.133 00:16:19.133 real 0m51.907s 00:16:19.133 user 0m43.463s 00:16:19.133 sys 0m7.774s 00:16:19.133 ************************************ 00:16:19.133 END TEST nvme_xnvme 00:16:19.133 ************************************ 00:16:19.133 08:29:06 nvme_xnvme -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:19.133 08:29:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.133 08:29:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:19.133 08:29:06 -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:16:19.133 08:29:06 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:19.133 08:29:06 -- common/autotest_common.sh@10 -- # set +x 00:16:19.133 ************************************ 00:16:19.133 START TEST blockdev_xnvme 00:16:19.133 ************************************ 00:16:19.133 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:19.392 * Looking for test storage... 
00:16:19.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1638 -- # lcov --version 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.392 08:29:06 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:16:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.392 --rc genhtml_branch_coverage=1 00:16:19.392 --rc genhtml_function_coverage=1 00:16:19.392 --rc genhtml_legend=1 00:16:19.392 --rc geninfo_all_blocks=1 00:16:19.392 --rc geninfo_unexecuted_blocks=1 00:16:19.392 00:16:19.392 ' 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:16:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.392 --rc genhtml_branch_coverage=1 00:16:19.392 --rc genhtml_function_coverage=1 00:16:19.392 --rc genhtml_legend=1 
00:16:19.392 --rc geninfo_all_blocks=1 00:16:19.392 --rc geninfo_unexecuted_blocks=1 00:16:19.392 00:16:19.392 ' 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:16:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.392 --rc genhtml_branch_coverage=1 00:16:19.392 --rc genhtml_function_coverage=1 00:16:19.392 --rc genhtml_legend=1 00:16:19.392 --rc geninfo_all_blocks=1 00:16:19.392 --rc geninfo_unexecuted_blocks=1 00:16:19.392 00:16:19.392 ' 00:16:19.392 08:29:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:16:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.392 --rc genhtml_branch_coverage=1 00:16:19.392 --rc genhtml_function_coverage=1 00:16:19.392 --rc genhtml_legend=1 00:16:19.392 --rc geninfo_all_blocks=1 00:16:19.392 --rc geninfo_unexecuted_blocks=1 00:16:19.392 00:16:19.392 ' 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:19.392 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70445 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:19.393 08:29:06 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70445 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@838 -- # 
'[' -z 70445 ']' 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:19.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:19.393 08:29:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.393 [2024-11-20 08:29:06.923541] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:19.393 [2024-11-20 08:29:06.923672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70445 ] 00:16:19.652 [2024-11-20 08:29:07.103521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.912 [2024-11-20 08:29:07.214273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.851 08:29:08 blockdev_xnvme -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:20.851 08:29:08 blockdev_xnvme -- common/autotest_common.sh@871 -- # return 0 00:16:20.851 08:29:08 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:20.851 08:29:08 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:20.851 08:29:08 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:20.851 08:29:08 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:20.851 08:29:08 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:21.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:21.370 Waiting for block devices as requested 00:16:21.370 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.630 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.630 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.889 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:27.166 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1602 -- # zoned_devs=() 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1602 -- # local -gA zoned_devs 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1603 -- # local nvme bdf 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme0n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme1n1 00:16:27.166 
08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme2n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n2 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme2n2 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme2n3 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme2n3 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3c3n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme3c3n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1606 -- # is_block_zoned nvme3n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1595 -- # local device=nvme3n1 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:27.166 08:29:14 blockdev_xnvme -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:27.166 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:27.167 nvme0n1 00:16:27.167 nvme1n1 00:16:27.167 nvme2n1 00:16:27.167 nvme2n2 00:16:27.167 nvme2n3 00:16:27.167 nvme3n1 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d7662835-966d-448f-b240-6cc51a353f7b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d7662835-966d-448f-b240-6cc51a353f7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e89abd58-4a05-4d6c-acd6-f1cd07a4e874"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e89abd58-4a05-4d6c-acd6-f1cd07a4e874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "325c66db-ca35-41a6-992e-65d9d53982a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "325c66db-ca35-41a6-992e-65d9d53982a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "761e4519-6d1f-41f3-8567-54c10d161681"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "761e4519-6d1f-41f3-8567-54c10d161681",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d34da384-7897-44b8-93c2-44e5c19765aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d34da384-7897-44b8-93c2-44e5c19765aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "84b3ddd1-c98e-4f66-a61d-1632b0c3f577"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "84b3ddd1-c98e-4f66-a61d-1632b0c3f577",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:27.167 08:29:14 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70445 00:16:27.167 08:29:14 blockdev_xnvme -- 
common/autotest_common.sh@957 -- # '[' -z 70445 ']' 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@961 -- # kill -0 70445 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@962 -- # uname 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:16:27.167 08:29:14 blockdev_xnvme -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 70445 00:16:27.427 08:29:14 blockdev_xnvme -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:16:27.427 08:29:14 blockdev_xnvme -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:16:27.427 killing process with pid 70445 00:16:27.427 08:29:14 blockdev_xnvme -- common/autotest_common.sh@975 -- # echo 'killing process with pid 70445' 00:16:27.427 08:29:14 blockdev_xnvme -- common/autotest_common.sh@976 -- # kill 70445 00:16:27.427 08:29:14 blockdev_xnvme -- common/autotest_common.sh@981 -- # wait 70445 00:16:29.961 08:29:17 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:29.961 08:29:17 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:29.961 08:29:17 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 7 -le 1 ']' 00:16:29.961 08:29:17 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:29.961 08:29:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.961 ************************************ 00:16:29.961 START TEST bdev_hello_world 00:16:29.961 ************************************ 00:16:29.961 08:29:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:29.961 [2024-11-20 08:29:17.103425] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:29.961 [2024-11-20 08:29:17.103564] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:16:29.961 [2024-11-20 08:29:17.282547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.961 [2024-11-20 08:29:17.390294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.529 [2024-11-20 08:29:17.812614] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:30.529 [2024-11-20 08:29:17.812660] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:30.529 [2024-11-20 08:29:17.812678] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:30.529 [2024-11-20 08:29:17.814735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:30.529 [2024-11-20 08:29:17.815208] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:30.529 [2024-11-20 08:29:17.815236] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:30.529 [2024-11-20 08:29:17.815485] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
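[Note: the bdev_hello_world step above amounts to running the prebuilt hello_bdev example against the generated bdev config. A minimal manual reproduction — a sketch assuming the same repo checkout and config file as this run — would be:

    # launch the example app against the first xNVMe bdev; per the NOTICE lines
    # above it opens the bdev, writes "Hello World!" through an io channel,
    # reads it back, and stops the app
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
]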
00:16:30.529 00:16:30.529 [2024-11-20 08:29:17.815509] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:31.466 00:16:31.467 real 0m1.841s 00:16:31.467 user 0m1.500s 00:16:31.467 sys 0m0.225s 00:16:31.467 08:29:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:31.467 08:29:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:31.467 ************************************ 00:16:31.467 END TEST bdev_hello_world 00:16:31.467 ************************************ 00:16:31.467 08:29:18 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:31.467 08:29:18 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:16:31.467 08:29:18 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:31.467 08:29:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.467 ************************************ 00:16:31.467 START TEST bdev_bounds 00:16:31.467 ************************************ 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1132 -- # bdev_bounds '' 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70863 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:31.467 Process bdevio pid: 70863 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70863' 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70863 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # '[' -z 70863 ']' 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:31.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:31.467 08:29:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:31.725 [2024-11-20 08:29:19.033113] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:16:31.726 [2024-11-20 08:29:19.033259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70863 ] 00:16:31.726 [2024-11-20 08:29:19.213157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.984 [2024-11-20 08:29:19.326114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.984 [2024-11-20 08:29:19.326277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.984 [2024-11-20 08:29:19.326344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.550 08:29:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:32.550 08:29:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@871 -- # return 0 00:16:32.550 08:29:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:32.550 I/O targets: 00:16:32.550 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:32.550 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:32.550 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:32.550 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:32.550 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:32.550 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:32.550 00:16:32.550 00:16:32.550 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.550 http://cunit.sourceforge.net/ 00:16:32.550 00:16:32.550 00:16:32.550 Suite: bdevio tests on: nvme3n1 00:16:32.550 Test: blockdev write read block ...passed 00:16:32.550 Test: blockdev write zeroes read block ...passed 00:16:32.550 Test: blockdev write zeroes read no split ...passed 00:16:32.550 Test: blockdev write zeroes read split ...passed 00:16:32.550 Test: blockdev write zeroes read split partial ...passed 00:16:32.550 Test: blockdev reset ...passed 00:16:32.550 Test: blockdev write read 8 blocks ...passed 00:16:32.550 Test: blockdev write read size > 128k ...passed 00:16:32.550 Test: blockdev write read invalid size ...passed 00:16:32.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.550 Test: blockdev write read max offset ...passed 00:16:32.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.550 Test: blockdev writev readv 8 blocks ...passed 00:16:32.550 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.550 Test: blockdev writev readv block ...passed 00:16:32.550 Test: blockdev writev readv size > 128k ...passed 00:16:32.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.550 Test: blockdev comparev and writev ...passed 00:16:32.550 Test: blockdev nvme passthru rw ...passed 00:16:32.550 Test: blockdev nvme passthru vendor specific ...passed 00:16:32.550 Test: blockdev nvme admin passthru ...passed 00:16:32.551 Test: blockdev copy ...passed 00:16:32.551 Suite: bdevio tests on: nvme2n3 00:16:32.551 Test: blockdev write read block ...passed 00:16:32.551 Test: blockdev write zeroes read block ...passed 00:16:32.551 Test: blockdev write zeroes read no split ...passed 00:16:32.551 Test: blockdev write zeroes read split ...passed 00:16:32.811 Test: blockdev write zeroes read split partial ...passed 00:16:32.811 Test: blockdev reset ...passed 
00:16:32.811 Test: blockdev write read 8 blocks ...passed 00:16:32.811 Test: blockdev write read size > 128k ...passed 00:16:32.811 Test: blockdev write read invalid size ...passed 00:16:32.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.811 Test: blockdev write read max offset ...passed 00:16:32.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.811 Test: blockdev writev readv 8 blocks ...passed 00:16:32.811 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.811 Test: blockdev writev readv block ...passed 00:16:32.811 Test: blockdev writev readv size > 128k ...passed 00:16:32.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.811 Test: blockdev comparev and writev ...passed 00:16:32.811 Test: blockdev nvme passthru rw ...passed 00:16:32.811 Test: blockdev nvme passthru vendor specific ...passed 00:16:32.811 Test: blockdev nvme admin passthru ...passed 00:16:32.811 Test: blockdev copy ...passed 00:16:32.811 Suite: bdevio tests on: nvme2n2 00:16:32.811 Test: blockdev write read block ...passed 00:16:32.811 Test: blockdev write zeroes read block ...passed 00:16:32.811 Test: blockdev write zeroes read no split ...passed 00:16:32.811 Test: blockdev write zeroes read split ...passed 00:16:32.811 Test: blockdev write zeroes read split partial ...passed 00:16:32.811 Test: blockdev reset ...passed 00:16:32.811 Test: blockdev write read 8 blocks ...passed 00:16:32.811 Test: blockdev write read size > 128k ...passed 00:16:32.811 Test: blockdev write read invalid size ...passed 00:16:32.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.811 Test: blockdev write read max offset ...passed 00:16:32.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.811 Test: blockdev writev readv 8 blocks ...passed 00:16:32.811 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.811 Test: blockdev writev readv block ...passed 00:16:32.811 Test: blockdev writev readv size > 128k ...passed 00:16:32.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.811 Test: blockdev comparev and writev ...passed 00:16:32.811 Test: blockdev nvme passthru rw ...passed 00:16:32.811 Test: blockdev nvme passthru vendor specific ...passed 00:16:32.811 Test: blockdev nvme admin passthru ...passed 00:16:32.811 Test: blockdev copy ...passed 00:16:32.811 Suite: bdevio tests on: nvme2n1 00:16:32.811 Test: blockdev write read block ...passed 00:16:32.811 Test: blockdev write zeroes read block ...passed 00:16:32.811 Test: blockdev write zeroes read no split ...passed 00:16:32.811 Test: blockdev write zeroes read split ...passed 00:16:32.811 Test: blockdev write zeroes read split partial ...passed 00:16:32.811 Test: blockdev reset ...passed 00:16:32.811 Test: blockdev write read 8 blocks ...passed 00:16:32.811 Test: blockdev write read size > 128k ...passed 00:16:32.811 Test: blockdev write read invalid size ...passed 00:16:32.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.811 Test: blockdev write read max offset ...passed 00:16:32.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.811 Test: blockdev writev readv 8 blocks 
...passed 00:16:32.811 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.811 Test: blockdev writev readv block ...passed 00:16:32.811 Test: blockdev writev readv size > 128k ...passed 00:16:32.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.811 Test: blockdev comparev and writev ...passed 00:16:32.811 Test: blockdev nvme passthru rw ...passed 00:16:32.811 Test: blockdev nvme passthru vendor specific ...passed 00:16:32.811 Test: blockdev nvme admin passthru ...passed 00:16:32.811 Test: blockdev copy ...passed 00:16:32.811 Suite: bdevio tests on: nvme1n1 00:16:32.811 Test: blockdev write read block ...passed 00:16:32.811 Test: blockdev write zeroes read block ...passed 00:16:32.811 Test: blockdev write zeroes read no split ...passed 00:16:32.811 Test: blockdev write zeroes read split ...passed 00:16:32.811 Test: blockdev write zeroes read split partial ...passed 00:16:32.811 Test: blockdev reset ...passed 00:16:32.811 Test: blockdev write read 8 blocks ...passed 00:16:32.811 Test: blockdev write read size > 128k ...passed 00:16:32.811 Test: blockdev write read invalid size ...passed 00:16:32.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:33.071 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:33.071 Test: blockdev write read max offset ...passed 00:16:33.071 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:33.071 Test: blockdev writev readv 8 blocks ...passed 00:16:33.071 Test: blockdev writev readv 30 x 1block ...passed 00:16:33.071 Test: blockdev writev readv block ...passed 00:16:33.071 Test: blockdev writev readv size > 128k ...passed 00:16:33.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:33.071 Test: blockdev comparev and writev ...passed 00:16:33.071 Test: blockdev nvme passthru rw ...passed 00:16:33.071 Test: blockdev nvme passthru vendor specific ...passed 00:16:33.071 Test: blockdev nvme admin passthru ...passed 00:16:33.071 Test: blockdev copy ...passed 00:16:33.071 Suite: bdevio tests on: nvme0n1 00:16:33.071 Test: blockdev write read block ...passed 00:16:33.071 Test: blockdev write zeroes read block ...passed 00:16:33.071 Test: blockdev write zeroes read no split ...passed 00:16:33.071 Test: blockdev write zeroes read split ...passed 00:16:33.071 Test: blockdev write zeroes read split partial ...passed 00:16:33.071 Test: blockdev reset ...passed 00:16:33.071 Test: blockdev write read 8 blocks ...passed 00:16:33.071 Test: blockdev write read size > 128k ...passed 00:16:33.071 Test: blockdev write read invalid size ...passed 00:16:33.071 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:33.071 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:33.071 Test: blockdev write read max offset ...passed 00:16:33.071 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:33.071 Test: blockdev writev readv 8 blocks ...passed 00:16:33.071 Test: blockdev writev readv 30 x 1block ...passed 00:16:33.071 Test: blockdev writev readv block ...passed 00:16:33.071 Test: blockdev writev readv size > 128k ...passed 00:16:33.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:33.071 Test: blockdev comparev and writev ...passed 00:16:33.071 Test: blockdev nvme passthru rw ...passed 00:16:33.071 Test: blockdev nvme passthru vendor specific ...passed 00:16:33.071 Test: blockdev nvme admin passthru ...passed 00:16:33.071 Test: blockdev copy ...passed 
00:16:33.071 00:16:33.071 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.072 suites 6 6 n/a 0 0 00:16:33.072 tests 138 138 138 0 0 00:16:33.072 asserts 780 780 780 0 n/a 00:16:33.072 00:16:33.072 Elapsed time = 1.306 seconds 00:16:33.072 0 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70863 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' -z 70863 ']' 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@961 -- # kill -0 70863 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # uname 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 70863 00:16:33.072 killing process with pid 70863 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@975 -- # echo 'killing process with pid 70863' 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # kill 70863 00:16:33.072 08:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@981 -- # wait 70863 00:16:34.455 08:29:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:34.455 00:16:34.455 real 0m2.684s 00:16:34.455 user 0m6.671s 00:16:34.455 sys 0m0.417s 00:16:34.455 08:29:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:34.455 08:29:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:34.455 ************************************ 00:16:34.455 END TEST bdev_bounds 00:16:34.455 ************************************ 00:16:34.455 08:29:21 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:34.455 08:29:21 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:16:34.455 08:29:21 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:34.455 08:29:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.455 ************************************ 00:16:34.455 START TEST bdev_nbd 00:16:34.455 ************************************ 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1132 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70917 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70917 /var/tmp/spdk-nbd.sock 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # '[' -z 70917 ']' 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:34.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:34.455 08:29:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:34.455 [2024-11-20 08:29:21.806482] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
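[Note: the nbd phase that follows drives everything over the dedicated RPC socket: each bdev is exported as a kernel /dev/nbdX node, smoke-tested with a single direct-I/O read via dd, then detached. A condensed sketch of one per-device cycle, reusing the socket, RPC script, and scratch file paths from this run (the comments and shell variables are illustrative, not the test script's literal code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock nbd_start_disk nvme0n1 /dev/nbd0   # export the bdev as /dev/nbd0
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct                 # read one 4 KiB block with O_DIRECT
    $rpc -s $sock nbd_stop_disk /dev/nbd0            # detach the NBD device
    $rpc -s $sock nbd_get_disks                      # reports an empty list once all are stopped
]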
00:16:34.455 [2024-11-20 08:29:21.806797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.455 [2024-11-20 08:29:21.989143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.714 [2024-11-20 08:29:22.098387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # return 0 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:35.282 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:35.283 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:35.283 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:35.283 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:35.283 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:35.283 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.542 
1+0 records in 00:16:35.542 1+0 records out 00:16:35.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705657 s, 5.8 MB/s 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:35.542 08:29:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.801 1+0 records in 00:16:35.801 1+0 records out 00:16:35.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694118 s, 5.9 MB/s 00:16:35.801 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:35.802 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:36.061 08:29:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd2 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd2 /proc/partitions 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.061 1+0 records in 00:16:36.061 1+0 records out 00:16:36.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632288 s, 6.5 MB/s 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:36.061 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:36.062 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd3 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd3 /proc/partitions 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.321 1+0 records in 00:16:36.321 1+0 records out 00:16:36.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798903 s, 5.1 MB/s 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:36.321 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd4 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:36.580 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd4 /proc/partitions 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.581 1+0 records in 00:16:36.581 1+0 records out 00:16:36.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676308 s, 6.1 MB/s 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:36.581 08:29:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd5 00:16:36.581 08:29:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:36.581 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd5 /proc/partitions 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.840 1+0 records in 00:16:36.840 1+0 records out 00:16:36.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746646 s, 5.5 MB/s 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd0", 00:16:36.840 "bdev_name": "nvme0n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd1", 00:16:36.840 "bdev_name": "nvme1n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd2", 00:16:36.840 "bdev_name": "nvme2n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd3", 00:16:36.840 "bdev_name": "nvme2n2" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd4", 00:16:36.840 "bdev_name": "nvme2n3" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd5", 00:16:36.840 "bdev_name": "nvme3n1" 00:16:36.840 } 00:16:36.840 ]' 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:36.840 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd0", 00:16:36.840 "bdev_name": "nvme0n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd1", 00:16:36.840 "bdev_name": "nvme1n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd2", 00:16:36.840 "bdev_name": "nvme2n1" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd3", 00:16:36.840 "bdev_name": "nvme2n2" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd4", 00:16:36.840 "bdev_name": "nvme2n3" 00:16:36.840 }, 00:16:36.840 { 00:16:36.840 "nbd_device": "/dev/nbd5", 00:16:36.840 "bdev_name": "nvme3n1" 00:16:36.840 } 00:16:36.840 ]' 00:16:36.840 08:29:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.099 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.358 08:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.618 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.878 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.138 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:38.398 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:38.658 08:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:38.658 /dev/nbd0 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:38.658 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:38.918 1+0 records in 00:16:38.918 1+0 records out 00:16:38.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488124 s, 8.4 MB/s 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:16:38.918 /dev/nbd1 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:38.918 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:38.918 1+0 records in 00:16:38.918 1+0 records out 00:16:38.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088641 s, 4.6 MB/s 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:39.178 08:29:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:39.178 /dev/nbd10 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd10 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd10 /proc/partitions 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.178 1+0 records in 00:16:39.178 1+0 records out 00:16:39.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716796 s, 5.7 MB/s 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:39.178 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:39.437 /dev/nbd11 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd11 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:39.437 08:29:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd11 /proc/partitions 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.437 1+0 records in 00:16:39.437 1+0 records out 00:16:39.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744413 s, 5.5 MB/s 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:39.437 08:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:39.696 /dev/nbd12 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd12 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd12 /proc/partitions 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.696 1+0 records in 00:16:39.696 1+0 records out 00:16:39.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743429 s, 5.5 MB/s 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:39.696 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:39.954 /dev/nbd13 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # local nbd_name=nbd13 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # local i 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@879 -- # grep -q -w nbd13 /proc/partitions 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # break 00:16:39.954 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.955 1+0 records in 00:16:39.955 1+0 records out 00:16:39.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767227 s, 5.3 MB/s 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # size=4096 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@896 -- # return 0 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.955 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd0", 00:16:40.213 "bdev_name": "nvme0n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd1", 00:16:40.213 "bdev_name": "nvme1n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd10", 00:16:40.213 "bdev_name": "nvme2n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd11", 00:16:40.213 "bdev_name": "nvme2n2" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd12", 00:16:40.213 "bdev_name": "nvme2n3" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd13", 00:16:40.213 "bdev_name": "nvme3n1" 00:16:40.213 } 00:16:40.213 ]' 00:16:40.213 08:29:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd0", 00:16:40.213 "bdev_name": "nvme0n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd1", 00:16:40.213 "bdev_name": "nvme1n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd10", 00:16:40.213 "bdev_name": "nvme2n1" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd11", 00:16:40.213 "bdev_name": "nvme2n2" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd12", 00:16:40.213 "bdev_name": "nvme2n3" 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "nbd_device": "/dev/nbd13", 00:16:40.213 "bdev_name": "nvme3n1" 00:16:40.213 } 00:16:40.213 ]' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:40.213 /dev/nbd1 00:16:40.213 /dev/nbd10 00:16:40.213 /dev/nbd11 00:16:40.213 /dev/nbd12 00:16:40.213 /dev/nbd13' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:40.213 /dev/nbd1 00:16:40.213 /dev/nbd10 00:16:40.213 /dev/nbd11 00:16:40.213 /dev/nbd12 00:16:40.213 /dev/nbd13' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:40.213 256+0 records in 00:16:40.213 256+0 records out 00:16:40.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012286 s, 85.3 MB/s 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:40.213 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:40.472 256+0 records in 00:16:40.472 256+0 records out 00:16:40.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124097 s, 8.4 MB/s 00:16:40.472 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:40.472 08:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:40.732 256+0 records in 00:16:40.732 256+0 records out 00:16:40.732 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.153955 s, 6.8 MB/s
00:16:40.732 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:40.732 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:16:40.732 256+0 records in
00:16:40.732 256+0 records out
00:16:40.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125966 s, 8.3 MB/s
00:16:40.732 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:40.732 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:16:40.991 256+0 records in
00:16:40.991 256+0 records out
00:16:40.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125146 s, 8.4 MB/s
00:16:40.991 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:40.991 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:16:40.991 256+0 records in
00:16:40.991 256+0 records out
00:16:40.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125062 s, 8.4 MB/s
00:16:40.991 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:40.991 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:16:41.250 256+0 records in
00:16:41.250 256+0 records out
00:16:41.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127778 s, 8.2 MB/s
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
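The nbd_dd_data_verify pass traced here first writes the same 1 MiB random file onto each exported device with dd, then reads every device back through cmp. The round trip, condensed into a standalone sketch (device list and temp-file path are taken from the trace; the error message is added here for illustration):

  # Sketch: write a 1 MiB random pattern to every NBD device, then verify it byte-for-byte.
  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      # oflag=direct bypasses the page cache so the data really reaches the bdev
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      # -b reports the first differing byte; -n 1M limits the compare to the pattern
      cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev" >&2
  done
  rm "$tmp_file"
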
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:16:41.250 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:41.251 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:16:41.251 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:41.251 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:41.251 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:41.251 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:41.509 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:41.510 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:41.510 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:41.510 08:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:41.510 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:41.768 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:41.768 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:41.768 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:41.768 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.769 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:42.027 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.028 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.287 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.546 08:29:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.546 08:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:42.805 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:43.065 malloc_lvol_verify 00:16:43.065 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:43.065 6c36f08c-384b-432c-a289-cc245b95eede 00:16:43.065 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:43.324 f2bf5eaa-60cb-45f2-8674-ecb79ead0dd8 00:16:43.324 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:43.584 /dev/nbd0 00:16:43.584 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:43.584 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:43.584 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:43.584 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:43.584 08:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
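This final nbd stage, nbd_with_lvol_verify, layers a logical volume on top of a malloc bdev, exports it as /dev/nbd0, waits for the kernel to report a non-zero capacity, and proves the device usable by putting an ext4 filesystem on it; the mke2fs output follows below. The RPC sequence, rebuilt from the trace as a sketch (the explicit read of /sys/block/nbd0/size is inferred from the capacity check shown above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
  $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
  $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
  $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]  # capacity visible to kernel
  mkfs.ext4 /dev/nbd0                                             # must succeed for the test to pass
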
00:16:43.584 mke2fs 1.47.0 (5-Feb-2023)
00:16:43.584 Discarding device blocks: 0/4096 done
00:16:43.584 Creating filesystem with 4096 1k blocks and 1024 inodes
00:16:43.584
00:16:43.584 Allocating group tables: 0/1 done
00:16:43.584 Writing inode tables: 0/1 done
00:16:43.584 Creating journal (1024 blocks): done
00:16:43.584 Writing superblocks and filesystem accounting information: 0/1 done
00:16:43.584
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:43.584 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70917
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' -z 70917 ']'
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@961 -- # kill -0 70917
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # uname
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']'
00:16:43.843 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 70917
00:16:43.843 killing process with pid 70917
00:16:43.844 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@963 -- # process_name=reactor_0
00:16:43.844 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']'
00:16:43.844 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@975 -- # echo 'killing process with pid 70917'
00:16:43.844 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # kill 70917
00:16:43.844 08:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@981 -- # wait 70917
00:16:44.882 ************************************
00:16:44.882 END TEST bdev_nbd
00:16:44.882 ************************************
00:16:44.882 08:29:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:16:44.882
00:16:44.882 real 0m10.725s
00:16:44.882 user 0m13.783s
00:16:44.882 sys 0m4.576s
00:16:44.882 08:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1133 -- # xtrace_disable
00:16:44.882
08:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 08:29:32 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:45.151 08:29:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:45.151 08:29:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:45.151 08:29:32 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:45.151 08:29:32 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:16:45.151 08:29:32 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:45.151 08:29:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 ************************************ 00:16:45.151 START TEST bdev_fio 00:16:45.151 ************************************ 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1132 -- # fio_test_suite '' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:45.151 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1272 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1273 -- # local workload=verify 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1274 -- # local bdev_type=AIO 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1275 -- # local env_context= 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local fio_dir=/usr/src/fio 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # '[' -z verify ']' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -n '' ']' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # cat 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # '[' verify == verify ']' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1306 -- # cat 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' AIO == AIO ']' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # /usr/src/fio/fio --version 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # echo 
serialize_overlap=1 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:16:45.151 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1108 -- # '[' 11 -le 1 ']' 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:45.152 ************************************ 00:16:45.152 START TEST bdev_fio_rw_verify 00:16:45.152 ************************************ 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1132 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1331 -- # local sanitizers 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # shift 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local asan_lib= 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # grep libasan 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # break 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:45.152 08:29:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:45.411 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:45.411 fio-3.35 00:16:45.411 Starting 6 threads 00:16:57.621 00:16:57.621 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71328: Wed Nov 20 08:29:43 2024 00:16:57.621 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(1290MiB/10001msec) 00:16:57.622 slat (usec): min=2, max=631, avg= 5.96, stdev= 2.76 00:16:57.622 clat (usec): min=115, max=5691, avg=601.66, 
stdev=155.33
00:16:57.622 lat (usec): min=118, max=5698, avg=607.62, stdev=155.92
00:16:57.622 clat percentiles (usec):
00:16:57.622 | 50.000th=[ 635], 99.000th=[ 930], 99.900th=[ 1336], 99.990th=[ 3523],
00:16:57.622 | 99.999th=[ 5669]
00:16:57.622 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(1298MiB/10001msec); 0 zone resets
00:16:57.622 slat (usec): min=6, max=776, avg=18.07, stdev=15.95
00:16:57.622 clat (usec): min=70, max=5105, avg=656.13, stdev=151.00
00:16:57.622 lat (usec): min=95, max=5122, avg=674.20, stdev=152.00
00:16:57.622 clat percentiles (usec):
00:16:57.622 | 50.000th=[ 676], 99.000th=[ 1057], 99.900th=[ 1532], 99.990th=[ 2147],
00:16:57.622 | 99.999th=[ 5080]
00:16:57.622 bw ( KiB/s): min=111992, max=147689, per=100.00%, avg=133497.95, stdev=1966.77, samples=114
00:16:57.622 iops : min=27998, max=36922, avg=33374.37, stdev=491.69, samples=114
00:16:57.622 lat (usec) : 100=0.01%, 250=3.09%, 500=11.73%, 750=72.69%, 1000=11.51%
00:16:57.622 lat (msec) : 2=0.96%, 4=0.01%, 10=0.01%
00:16:57.622 cpu : usr=62.73%, sys=27.76%, ctx=7368, majf=0, minf=27372
00:16:57.622 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:57.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:57.622 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:57.622 issued rwts: total=330261,332397,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:57.622 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:57.622
00:16:57.622 Run status group 0 (all jobs):
00:16:57.622 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1290MiB (1353MB), run=10001-10001msec
00:16:57.622 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1298MiB (1361MB), run=10001-10001msec
00:16:57.622 -----------------------------------------------------
00:16:57.622 Suppressions used:
00:16:57.622 count bytes template
00:16:57.622 6 48 /usr/src/fio/parse.c
00:16:57.622 1898 182208 /usr/src/fio/iolog.c
00:16:57.622 1 8 libtcmalloc_minimal.so
00:16:57.622 1 904 libcrypto.so
00:16:57.622 -----------------------------------------------------
00:16:57.622
00:16:57.622
00:16:57.622 real 0m12.449s
00:16:57.622 user 0m39.565s
00:16:57.622 sys 0m17.102s
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1133 -- # xtrace_disable
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:57.622 ************************************
00:16:57.622 END TEST bdev_fio_rw_verify
00:16:57.622 ************************************
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1272 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1273 -- # local workload=trim
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1274 -- # local bdev_type=
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1275 -- # local env_context=
00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local
fio_dir=/usr/src/fio 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # '[' -z trim ']' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -n '' ']' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # cat 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # '[' trim == verify ']' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # '[' trim == trim ']' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo rw=trimwrite 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d7662835-966d-448f-b240-6cc51a353f7b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d7662835-966d-448f-b240-6cc51a353f7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e89abd58-4a05-4d6c-acd6-f1cd07a4e874"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e89abd58-4a05-4d6c-acd6-f1cd07a4e874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "325c66db-ca35-41a6-992e-65d9d53982a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "325c66db-ca35-41a6-992e-65d9d53982a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "761e4519-6d1f-41f3-8567-54c10d161681"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "761e4519-6d1f-41f3-8567-54c10d161681",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d34da384-7897-44b8-93c2-44e5c19765aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d34da384-7897-44b8-93c2-44e5c19765aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "84b3ddd1-c98e-4f66-a61d-1632b0c3f577"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "84b3ddd1-c98e-4f66-a61d-1632b0c3f577",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:57.622 /home/vagrant/spdk_repo/spdk 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
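The trim pass above is gated on a jq filter: the bdev JSON dump is piped through select(.supported_io_types.unmap == true) | .name, and since every xNVMe bdev in the dump reports "unmap": false, the result is empty, no trim jobs are generated, and the suite returns early. The filter itself, isolated as a sketch (bdevs.json here stands in for the dump the test pipes in):

  # Keep only the names of bdevs that advertise unmap (trim) support.
  jq -r 'select(.supported_io_types.unmap == true) | .name' bdevs.json
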
00:16:57.622 00:16:57.622 real 0m12.678s 00:16:57.622 user 0m39.672s 00:16:57.622 sys 0m17.227s 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:57.622 08:29:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:57.622 ************************************ 00:16:57.622 END TEST bdev_fio 00:16:57.622 ************************************ 00:16:57.880 08:29:45 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:57.880 08:29:45 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:57.880 08:29:45 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']' 00:16:57.880 08:29:45 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:57.880 08:29:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.880 ************************************ 00:16:57.880 START TEST bdev_verify 00:16:57.880 ************************************ 00:16:57.880 08:29:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:57.880 [2024-11-20 08:29:45.343305] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:57.880 [2024-11-20 08:29:45.343433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71499 ] 00:16:58.136 [2024-11-20 08:29:45.525284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.136 [2024-11-20 08:29:45.641471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.137 [2024-11-20 08:29:45.641508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.703 Running I/O for 5 seconds... 
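The five-second run whose results follow was launched with the bdevperf command traced above, reflowed here as a sketch for readability: 128 outstanding I/Os per job (-q), 4096-byte I/Os (-o), a verify workload that reads back and checks everything it writes (-w verify), a 5 second run (-t), and core mask 0x3, matching the two reactors just started (-m); -C is passed through by the test harness as-is:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
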
00:17:01.014 22784.00 IOPS, 89.00 MiB/s [2024-11-20T08:29:49.510Z] 23744.00 IOPS, 92.75 MiB/s [2024-11-20T08:29:50.444Z] 24170.67 IOPS, 94.42 MiB/s [2024-11-20T08:29:51.380Z] 24288.00 IOPS, 94.88 MiB/s [2024-11-20T08:29:51.380Z] 24140.80 IOPS, 94.30 MiB/s
00:17:03.819 Latency(us)
00:17:03.819 [2024-11-20T08:29:51.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:03.819 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0xa0000
00:17:03.819 nvme0n1 : 5.05 1873.88 7.32 0.00 0.00 68202.15 8264.38 63588.34
00:17:03.819 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0xa0000 length 0xa0000
00:17:03.819 nvme0n1 : 5.04 1801.47 7.04 0.00 0.00 70946.63 5948.25 64009.46
00:17:03.819 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0xbd0bd
00:17:03.819 nvme1n1 : 5.05 2731.74 10.67 0.00 0.00 46651.93 5658.73 55587.16
00:17:03.819 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:17:03.819 nvme1n1 : 5.05 2796.91 10.93 0.00 0.00 45529.35 6448.32 57692.74
00:17:03.819 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0x80000
00:17:03.819 nvme2n1 : 5.04 1879.84 7.34 0.00 0.00 67584.89 6895.76 58113.85
00:17:03.819 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x80000 length 0x80000
00:17:03.819 nvme2n1 : 5.05 1825.21 7.13 0.00 0.00 69840.70 6790.48 64009.46
00:17:03.819 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0x80000
00:17:03.819 nvme2n2 : 5.05 1875.16 7.32 0.00 0.00 67616.86 10054.12 64851.69
00:17:03.819 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x80000 length 0x80000
00:17:03.819 nvme2n2 : 5.04 1802.04 7.04 0.00 0.00 70560.52 6737.84 71168.41
00:17:03.819 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0x80000
00:17:03.819 nvme2n3 : 5.06 1870.97 7.31 0.00 0.00 67687.33 6053.53 59377.20
00:17:03.819 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x80000 length 0x80000
00:17:03.819 nvme2n3 : 5.05 1800.69 7.03 0.00 0.00 70503.72 6553.60 68641.72
00:17:03.819 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x0 length 0x20000
00:17:03.819 nvme3n1 : 5.06 1870.56 7.31 0.00 0.00 67669.98 6500.96 61061.65
00:17:03.819 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:03.819 Verification LBA range: start 0x20000 length 0x20000
00:17:03.819 nvme3n1 : 5.05 1800.27 7.03 0.00 0.00 70410.81 4316.43 74537.33
[2024-11-20T08:29:51.380Z] ===================================================================================================================
[2024-11-20T08:29:51.380Z] Total : 23928.73 93.47 0.00 0.00 63766.05 4316.43 74537.33
00:17:05.193
00:17:05.193 real 0m7.077s user 0m10.702s sys 0m2.156s
00:17:05.193 ************************************
00:17:05.193 END TEST
bdev_verify 00:17:05.193 ************************************ 00:17:05.194 08:29:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:05.194 08:29:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 08:29:52 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:05.194 08:29:52 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 16 -le 1 ']' 00:17:05.194 08:29:52 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:05.194 08:29:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 ************************************ 00:17:05.194 START TEST bdev_verify_big_io 00:17:05.194 ************************************ 00:17:05.194 08:29:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:05.194 [2024-11-20 08:29:52.514421] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:05.194 [2024-11-20 08:29:52.514598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 00:17:05.194 [2024-11-20 08:29:52.706798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:05.452 [2024-11-20 08:29:52.821465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.452 [2024-11-20 08:29:52.821495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.020 Running I/O for 5 seconds... 
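The big-I/O verify pass launched above is the same bdevperf command with only the I/O size changed, 4096 -> 65536, which is why the results below report far fewer IOPS at similar or higher MiB/s:

    # identical to the previous verify run except for -o
    bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3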
00:17:11.112 1584.00 IOPS, 99.00 MiB/s [2024-11-20T08:29:59.239Z] 3347.50 IOPS, 209.22 MiB/s [2024-11-20T08:29:59.239Z] 3721.33 IOPS, 232.58 MiB/s
00:17:11.678 Latency(us)
00:17:11.678 [2024-11-20T08:29:59.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:11.678 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0xa000
00:17:11.678 nvme0n1 : 5.62 159.45 9.97 0.00 0.00 771601.83 4842.82 788327.02
00:17:11.678 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0xa000 length 0xa000
00:17:11.678 nvme0n1 : 5.73 178.63 11.16 0.00 0.00 696319.52 5948.25 744531.07
00:17:11.678 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0xbd0b
00:17:11.678 nvme1n1 : 5.50 171.76 10.74 0.00 0.00 706423.08 36847.55 1145432.42
00:17:11.678 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0xbd0b length 0xbd0b
00:17:11.678 nvme1n1 : 5.72 142.62 8.91 0.00 0.00 866051.22 18002.66 1873118.89
00:17:11.678 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0x8000
00:17:11.678 nvme2n1 : 5.69 179.95 11.25 0.00 0.00 664603.96 52639.36 714210.80
00:17:11.678 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x8000 length 0x8000
00:17:11.678 nvme2n1 : 5.73 155.80 9.74 0.00 0.00 765981.02 143179.05 909608.10
00:17:11.678 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0x8000
00:17:11.678 nvme2n2 : 5.69 165.84 10.36 0.00 0.00 702774.90 64009.46 1266713.50
00:17:11.678 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x8000 length 0x8000
00:17:11.678 nvme2n2 : 5.73 165.46 10.34 0.00 0.00 705313.08 25898.56 976986.47
00:17:11.678 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0x8000
00:17:11.678 nvme2n3 : 5.69 154.54 9.66 0.00 0.00 734383.52 53271.03 1327354.04
00:17:11.678 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x8000 length 0x8000
00:17:11.678 nvme2n3 : 5.73 156.24 9.76 0.00 0.00 736855.71 25372.17 1354305.39
00:17:11.678 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x0 length 0x2000
00:17:11.678 nvme3n1 : 5.71 227.01 14.19 0.00 0.00 490098.09 6000.89 663677.02
00:17:11.678 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:11.678 Verification LBA range: start 0x2000 length 0x2000
00:17:11.678 nvme3n1 : 5.74 195.18 12.20 0.00 0.00 575507.13 21055.74 677152.69
[2024-11-20T08:29:59.239Z] ===================================================================================================================
[2024-11-20T08:29:59.239Z] Total : 2052.48 128.28 0.00 0.00 690070.96 4842.82 1873118.89
00:17:13.054
00:17:13.054 real 0m8.191s user 0m14.786s sys 0m0.579s
00:17:13.054 08:30:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1133 -- # xtrace_disable
00:17:13.054
************************************ 00:17:13.054 END TEST bdev_verify_big_io 00:17:13.054 ************************************ 00:17:13.054 08:30:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.313 08:30:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:13.313 08:30:00 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:17:13.313 08:30:00 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:13.313 08:30:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.313 ************************************ 00:17:13.313 START TEST bdev_write_zeroes 00:17:13.313 ************************************ 00:17:13.313 08:30:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:13.313 [2024-11-20 08:30:00.770503] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:13.313 [2024-11-20 08:30:00.770684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71718 ] 00:17:13.572 [2024-11-20 08:30:00.952677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.572 [2024-11-20 08:30:01.093552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.139 Running I/O for 1 seconds... 
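Before the results print, a quick sanity check on how these tables relate IOPS to MiB/s: MiB/s = IOPS * io_size / 2^20. For the final verify tick earlier (-o 4096), 24140.80 * 4096 / 1048576 = 94.30, matching the printed 94.30 MiB/s:

    printf '%.2f MiB/s\n' "$(echo '24140.80 * 4096 / 1048576' | bc -l)"   # -> 94.30 MiB/s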
00:17:15.336 37472.00 IOPS, 146.38 MiB/s
00:17:15.336 Latency(us)
00:17:15.336 [2024-11-20T08:30:02.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:15.336 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme0n1 : 1.04 5542.15 21.65 0.00 0.00 23077.70 10212.04 33899.75
00:17:15.336 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme1n1 : 1.04 9780.24 38.20 0.00 0.00 13029.46 5079.70 38742.57
00:17:15.336 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme2n1 : 1.04 5529.17 21.60 0.00 0.00 22975.50 7737.99 38742.57
00:17:15.336 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme2n2 : 1.04 5522.20 21.57 0.00 0.00 22987.92 7843.26 41479.81
00:17:15.336 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme2n3 : 1.04 5515.52 21.55 0.00 0.00 22997.71 7895.90 43164.27
00:17:15.336 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:15.336 nvme3n1 : 1.05 5508.88 21.52 0.00 0.00 23006.40 7895.90 44217.06
[2024-11-20T08:30:02.897Z] ===================================================================================================================
[2024-11-20T08:30:02.897Z] Total : 37398.17 146.09 0.00 0.00 20403.54 5079.70 44217.06
00:17:16.715
00:17:16.715 real 0m3.193s
00:17:16.715 user 0m2.390s
00:17:16.715 sys 0m0.637s
08:30:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1133 -- # xtrace_disable
00:17:16.715 08:30:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:17:16.715 ************************************
00:17:16.715 END TEST bdev_write_zeroes
00:17:16.715 ************************************
00:17:16.715 08:30:03 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:16.715 08:30:03 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']'
00:17:16.715 08:30:03 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable
00:17:16.715 08:30:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:16.715 ************************************
00:17:16.715 START TEST bdev_json_nonenclosed
00:17:16.715 ************************************
00:17:16.715 08:30:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:16.715 [2024-11-20 08:30:04.023633] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
00:17:16.715 [2024-11-20 08:30:04.023755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71776 ] 00:17:16.715 [2024-11-20 08:30:04.205205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.975 [2024-11-20 08:30:04.320303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.975 [2024-11-20 08:30:04.320411] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:16.975 [2024-11-20 08:30:04.320434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:16.975 [2024-11-20 08:30:04.320446] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:17.234 00:17:17.234 real 0m0.642s 00:17:17.234 user 0m0.405s 00:17:17.234 sys 0m0.133s 00:17:17.234 08:30:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:17.234 08:30:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:17.234 ************************************ 00:17:17.234 END TEST bdev_json_nonenclosed 00:17:17.234 ************************************ 00:17:17.234 08:30:04 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.234 08:30:04 blockdev_xnvme -- common/autotest_common.sh@1108 -- # '[' 13 -le 1 ']' 00:17:17.234 08:30:04 blockdev_xnvme -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:17.234 08:30:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.234 ************************************ 00:17:17.234 START TEST bdev_json_nonarray 00:17:17.235 ************************************ 00:17:17.235 08:30:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.235 [2024-11-20 08:30:04.741765] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:17.235 [2024-11-20 08:30:04.741880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71803 ] 00:17:17.494 [2024-11-20 08:30:04.924344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.494 [2024-11-20 08:30:05.036693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.494 [2024-11-20 08:30:05.036794] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
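Both JSON negative tests (nonenclosed above, nonarray below) assert that bdevperf rejects a malformed --json config and exits through spdk_app_stop with a non-zero code. The actual fixture files live under test/bdev/; as an illustrative guess only (not their verbatim contents), the two failure shapes they exercise look like:

    # nonenclosed.json - top level not enclosed in {}
    "subsystems": []

    # nonarray.json - 'subsystems' present but not an array
    { "subsystems": { "subsystem": "bdev", "config": [] } }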
00:17:17.494 [2024-11-20 08:30:05.036816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:17.494 [2024-11-20 08:30:05.036829] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:17.754 00:17:17.754 real 0m0.642s 00:17:17.754 user 0m0.404s 00:17:17.754 sys 0m0.133s 00:17:17.754 08:30:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:17.754 08:30:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:17.754 ************************************ 00:17:17.754 END TEST bdev_json_nonarray 00:17:17.754 ************************************ 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:18.013 08:30:05 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:18.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:26.711 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:26.711 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:26.711 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:26.711 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:26.711 00:17:26.711 real 1m7.512s 00:17:26.711 user 1m42.106s 00:17:26.711 sys 0m43.232s 00:17:26.711 08:30:14 blockdev_xnvme -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:26.711 08:30:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:26.711 ************************************ 00:17:26.711 END TEST blockdev_xnvme 00:17:26.711 ************************************ 00:17:26.711 08:30:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:26.711 08:30:14 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:26.711 08:30:14 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:26.711 08:30:14 -- common/autotest_common.sh@10 -- # set +x 00:17:26.711 ************************************ 00:17:26.711 START TEST ublk 00:17:26.711 ************************************ 00:17:26.711 08:30:14 ublk -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:26.711 * Looking for test storage... 
00:17:26.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:26.971 08:30:14 ublk -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:17:26.971 08:30:14 ublk -- common/autotest_common.sh@1638 -- # lcov --version 00:17:26.971 08:30:14 ublk -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.972 08:30:14 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.972 08:30:14 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.972 08:30:14 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.972 08:30:14 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.972 08:30:14 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.972 08:30:14 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:26.972 08:30:14 ublk -- scripts/common.sh@345 -- # : 1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.972 08:30:14 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.972 08:30:14 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@353 -- # local d=1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.972 08:30:14 ublk -- scripts/common.sh@355 -- # echo 1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.972 08:30:14 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@353 -- # local d=2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.972 08:30:14 ublk -- scripts/common.sh@355 -- # echo 2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.972 08:30:14 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.972 08:30:14 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.972 08:30:14 ublk -- scripts/common.sh@368 -- # return 0 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:17:26.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.972 --rc genhtml_branch_coverage=1 00:17:26.972 --rc genhtml_function_coverage=1 00:17:26.972 --rc genhtml_legend=1 00:17:26.972 --rc geninfo_all_blocks=1 00:17:26.972 --rc geninfo_unexecuted_blocks=1 00:17:26.972 00:17:26.972 ' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:17:26.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.972 --rc genhtml_branch_coverage=1 00:17:26.972 --rc genhtml_function_coverage=1 00:17:26.972 --rc genhtml_legend=1 00:17:26.972 --rc geninfo_all_blocks=1 00:17:26.972 --rc geninfo_unexecuted_blocks=1 00:17:26.972 00:17:26.972 ' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:17:26.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.972 --rc genhtml_branch_coverage=1 00:17:26.972 --rc 
genhtml_function_coverage=1 00:17:26.972 --rc genhtml_legend=1 00:17:26.972 --rc geninfo_all_blocks=1 00:17:26.972 --rc geninfo_unexecuted_blocks=1 00:17:26.972 00:17:26.972 ' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:17:26.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.972 --rc genhtml_branch_coverage=1 00:17:26.972 --rc genhtml_function_coverage=1 00:17:26.972 --rc genhtml_legend=1 00:17:26.972 --rc geninfo_all_blocks=1 00:17:26.972 --rc geninfo_unexecuted_blocks=1 00:17:26.972 00:17:26.972 ' 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:26.972 08:30:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:26.972 08:30:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:26.972 08:30:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:26.972 08:30:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:26.972 08:30:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:26.972 08:30:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:26.972 08:30:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:26.972 08:30:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:26.972 08:30:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:26.972 08:30:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.972 ************************************ 00:17:26.972 START TEST test_save_ublk_config 00:17:26.972 ************************************ 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1132 -- # test_save_config 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72109 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72109 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # '[' -z 72109 ']' 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
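The scripts/common.sh trace above (lt 1.15 2) is a field-by-field version comparison used to decide which lcov --rc flag spelling to export. A condensed re-implementation of the same walk, assuming numeric dot-separated fields, might read:

    lt() {  # true (0) when version $1 sorts strictly before $2
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc options'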
00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:26.972 08:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:26.972 [2024-11-20 08:30:14.525381] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:26.972 [2024-11-20 08:30:14.525522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72109 ] 00:17:27.232 [2024-11-20 08:30:14.707154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.490 [2024-11-20 08:30:14.819468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@871 -- # return 0 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:28.427 [2024-11-20 08:30:15.676011] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:28.427 [2024-11-20 08:30:15.677005] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:28.427 malloc0 00:17:28.427 [2024-11-20 08:30:15.761165] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:28.427 [2024-11-20 08:30:15.761283] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:28.427 [2024-11-20 08:30:15.761296] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:28.427 [2024-11-20 08:30:15.761305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:28.427 [2024-11-20 08:30:15.769037] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:28.427 [2024-11-20 08:30:15.769061] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:28.427 [2024-11-20 08:30:15.777017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:28.427 [2024-11-20 08:30:15.777116] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:28.427 [2024-11-20 08:30:15.801029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:28.427 0 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.427 08:30:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:28.686 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.687 08:30:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:28.687 
"subsystems": [ 00:17:28.687 { 00:17:28.687 "subsystem": "fsdev", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "fsdev_set_opts", 00:17:28.687 "params": { 00:17:28.687 "fsdev_io_pool_size": 65535, 00:17:28.687 "fsdev_io_cache_size": 256 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "keyring", 00:17:28.687 "config": [] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "iobuf", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "iobuf_set_options", 00:17:28.687 "params": { 00:17:28.687 "small_pool_count": 8192, 00:17:28.687 "large_pool_count": 1024, 00:17:28.687 "small_bufsize": 8192, 00:17:28.687 "large_bufsize": 135168, 00:17:28.687 "enable_numa": false 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "sock", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "sock_set_default_impl", 00:17:28.687 "params": { 00:17:28.687 "impl_name": "posix" 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "sock_impl_set_options", 00:17:28.687 "params": { 00:17:28.687 "impl_name": "ssl", 00:17:28.687 "recv_buf_size": 4096, 00:17:28.687 "send_buf_size": 4096, 00:17:28.687 "enable_recv_pipe": true, 00:17:28.687 "enable_quickack": false, 00:17:28.687 "enable_placement_id": 0, 00:17:28.687 "enable_zerocopy_send_server": true, 00:17:28.687 "enable_zerocopy_send_client": false, 00:17:28.687 "zerocopy_threshold": 0, 00:17:28.687 "tls_version": 0, 00:17:28.687 "enable_ktls": false 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "sock_impl_set_options", 00:17:28.687 "params": { 00:17:28.687 "impl_name": "posix", 00:17:28.687 "recv_buf_size": 2097152, 00:17:28.687 "send_buf_size": 2097152, 00:17:28.687 "enable_recv_pipe": true, 00:17:28.687 "enable_quickack": false, 00:17:28.687 "enable_placement_id": 0, 00:17:28.687 "enable_zerocopy_send_server": true, 00:17:28.687 "enable_zerocopy_send_client": false, 00:17:28.687 "zerocopy_threshold": 0, 00:17:28.687 "tls_version": 0, 00:17:28.687 "enable_ktls": false 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "vmd", 00:17:28.687 "config": [] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "accel", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "accel_set_options", 00:17:28.687 "params": { 00:17:28.687 "small_cache_size": 128, 00:17:28.687 "large_cache_size": 16, 00:17:28.687 "task_count": 2048, 00:17:28.687 "sequence_count": 2048, 00:17:28.687 "buf_count": 2048 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "bdev", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "bdev_set_options", 00:17:28.687 "params": { 00:17:28.687 "bdev_io_pool_size": 65535, 00:17:28.687 "bdev_io_cache_size": 256, 00:17:28.687 "bdev_auto_examine": true, 00:17:28.687 "iobuf_small_cache_size": 128, 00:17:28.687 "iobuf_large_cache_size": 16 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_raid_set_options", 00:17:28.687 "params": { 00:17:28.687 "process_window_size_kb": 1024, 00:17:28.687 "process_max_bandwidth_mb_sec": 0 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_iscsi_set_options", 00:17:28.687 "params": { 00:17:28.687 "timeout_sec": 30 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_nvme_set_options", 00:17:28.687 "params": { 00:17:28.687 "action_on_timeout": "none", 
00:17:28.687 "timeout_us": 0, 00:17:28.687 "timeout_admin_us": 0, 00:17:28.687 "keep_alive_timeout_ms": 10000, 00:17:28.687 "arbitration_burst": 0, 00:17:28.687 "low_priority_weight": 0, 00:17:28.687 "medium_priority_weight": 0, 00:17:28.687 "high_priority_weight": 0, 00:17:28.687 "nvme_adminq_poll_period_us": 10000, 00:17:28.687 "nvme_ioq_poll_period_us": 0, 00:17:28.687 "io_queue_requests": 0, 00:17:28.687 "delay_cmd_submit": true, 00:17:28.687 "transport_retry_count": 4, 00:17:28.687 "bdev_retry_count": 3, 00:17:28.687 "transport_ack_timeout": 0, 00:17:28.687 "ctrlr_loss_timeout_sec": 0, 00:17:28.687 "reconnect_delay_sec": 0, 00:17:28.687 "fast_io_fail_timeout_sec": 0, 00:17:28.687 "disable_auto_failback": false, 00:17:28.687 "generate_uuids": false, 00:17:28.687 "transport_tos": 0, 00:17:28.687 "nvme_error_stat": false, 00:17:28.687 "rdma_srq_size": 0, 00:17:28.687 "io_path_stat": false, 00:17:28.687 "allow_accel_sequence": false, 00:17:28.687 "rdma_max_cq_size": 0, 00:17:28.687 "rdma_cm_event_timeout_ms": 0, 00:17:28.687 "dhchap_digests": [ 00:17:28.687 "sha256", 00:17:28.687 "sha384", 00:17:28.687 "sha512" 00:17:28.687 ], 00:17:28.687 "dhchap_dhgroups": [ 00:17:28.687 "null", 00:17:28.687 "ffdhe2048", 00:17:28.687 "ffdhe3072", 00:17:28.687 "ffdhe4096", 00:17:28.687 "ffdhe6144", 00:17:28.687 "ffdhe8192" 00:17:28.687 ] 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_nvme_set_hotplug", 00:17:28.687 "params": { 00:17:28.687 "period_us": 100000, 00:17:28.687 "enable": false 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_malloc_create", 00:17:28.687 "params": { 00:17:28.687 "name": "malloc0", 00:17:28.687 "num_blocks": 8192, 00:17:28.687 "block_size": 4096, 00:17:28.687 "physical_block_size": 4096, 00:17:28.687 "uuid": "a0bba888-8a59-4a37-b1c4-1d15fb549397", 00:17:28.687 "optimal_io_boundary": 0, 00:17:28.687 "md_size": 0, 00:17:28.687 "dif_type": 0, 00:17:28.687 "dif_is_head_of_md": false, 00:17:28.687 "dif_pi_format": 0 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "bdev_wait_for_examine" 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "scsi", 00:17:28.687 "config": null 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "scheduler", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "framework_set_scheduler", 00:17:28.687 "params": { 00:17:28.687 "name": "static" 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "vhost_scsi", 00:17:28.687 "config": [] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "vhost_blk", 00:17:28.687 "config": [] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "ublk", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "ublk_create_target", 00:17:28.687 "params": { 00:17:28.687 "cpumask": "1" 00:17:28.687 } 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "method": "ublk_start_disk", 00:17:28.687 "params": { 00:17:28.687 "bdev_name": "malloc0", 00:17:28.687 "ublk_id": 0, 00:17:28.687 "num_queues": 1, 00:17:28.687 "queue_depth": 128 00:17:28.687 } 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "nbd", 00:17:28.687 "config": [] 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "subsystem": "nvmf", 00:17:28.687 "config": [ 00:17:28.687 { 00:17:28.687 "method": "nvmf_set_config", 00:17:28.687 "params": { 00:17:28.687 "discovery_filter": "match_any", 00:17:28.687 "admin_cmd_passthru": { 00:17:28.687 "identify_ctrlr": false 
00:17:28.687 }, 00:17:28.687 "dhchap_digests": [ 00:17:28.687 "sha256", 00:17:28.687 "sha384", 00:17:28.687 "sha512" 00:17:28.687 ], 00:17:28.687 "dhchap_dhgroups": [ 00:17:28.687 "null", 00:17:28.687 "ffdhe2048", 00:17:28.687 "ffdhe3072", 00:17:28.687 "ffdhe4096", 00:17:28.688 "ffdhe6144", 00:17:28.688 "ffdhe8192" 00:17:28.688 ] 00:17:28.688 } 00:17:28.688 }, 00:17:28.688 { 00:17:28.688 "method": "nvmf_set_max_subsystems", 00:17:28.688 "params": { 00:17:28.688 "max_subsystems": 1024 00:17:28.688 } 00:17:28.688 }, 00:17:28.688 { 00:17:28.688 "method": "nvmf_set_crdt", 00:17:28.688 "params": { 00:17:28.688 "crdt1": 0, 00:17:28.688 "crdt2": 0, 00:17:28.688 "crdt3": 0 00:17:28.688 } 00:17:28.688 } 00:17:28.688 ] 00:17:28.688 }, 00:17:28.688 { 00:17:28.688 "subsystem": "iscsi", 00:17:28.688 "config": [ 00:17:28.688 { 00:17:28.688 "method": "iscsi_set_options", 00:17:28.688 "params": { 00:17:28.688 "node_base": "iqn.2016-06.io.spdk", 00:17:28.688 "max_sessions": 128, 00:17:28.688 "max_connections_per_session": 2, 00:17:28.688 "max_queue_depth": 64, 00:17:28.688 "default_time2wait": 2, 00:17:28.688 "default_time2retain": 20, 00:17:28.688 "first_burst_length": 8192, 00:17:28.688 "immediate_data": true, 00:17:28.688 "allow_duplicated_isid": false, 00:17:28.688 "error_recovery_level": 0, 00:17:28.688 "nop_timeout": 60, 00:17:28.688 "nop_in_interval": 30, 00:17:28.688 "disable_chap": false, 00:17:28.688 "require_chap": false, 00:17:28.688 "mutual_chap": false, 00:17:28.688 "chap_group": 0, 00:17:28.688 "max_large_datain_per_connection": 64, 00:17:28.688 "max_r2t_per_connection": 4, 00:17:28.688 "pdu_pool_size": 36864, 00:17:28.688 "immediate_data_pool_size": 16384, 00:17:28.688 "data_out_pool_size": 2048 00:17:28.688 } 00:17:28.688 } 00:17:28.688 ] 00:17:28.688 } 00:17:28.688 ] 00:17:28.688 }' 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72109 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' -z 72109 ']' 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@961 -- # kill -0 72109 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # uname 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72109 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:17:28.688 killing process with pid 72109 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72109' 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # kill 72109 00:17:28.688 08:30:16 ublk.test_save_ublk_config -- common/autotest_common.sh@981 -- # wait 72109 00:17:30.065 [2024-11-20 08:30:17.549363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:30.065 [2024-11-20 08:30:17.588058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:30.065 [2024-11-20 08:30:17.588179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:30.065 [2024-11-20 08:30:17.600041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:30.065 [2024-11-20 
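Stripped of the harness, the sequence this test just validated is a plain save_config round trip. The RPC method names and parameters below come straight from the saved JSON above; the rpc.py flag spellings are from memory and may differ between SPDK versions:

    ./build/bin/spdk_tgt -L ublk &                          # first target
    ./scripts/rpc.py ublk_create_target                     # "cpumask": "1"
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096  # 8192 blocks * 4096 B = 32 MiB
    ./scripts/rpc.py ublk_start_disk malloc0 0              # 1 queue, depth 128 -> /dev/ublkb0
    ./scripts/rpc.py save_config > ublk.json                # the JSON printed above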
08:30:17.600094] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:30.065 [2024-11-20 08:30:17.600110] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:30.065 [2024-11-20 08:30:17.600133] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:30.065 [2024-11-20 08:30:17.600287] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72175 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72175 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # '[' -z 72175 ']' 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:31.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:31.971 08:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:31.971 "subsystems": [ 00:17:31.971 { 00:17:31.971 "subsystem": "fsdev", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "fsdev_set_opts", 00:17:31.971 "params": { 00:17:31.971 "fsdev_io_pool_size": 65535, 00:17:31.971 "fsdev_io_cache_size": 256 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "keyring", 00:17:31.971 "config": [] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "iobuf", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "iobuf_set_options", 00:17:31.971 "params": { 00:17:31.971 "small_pool_count": 8192, 00:17:31.971 "large_pool_count": 1024, 00:17:31.971 "small_bufsize": 8192, 00:17:31.971 "large_bufsize": 135168, 00:17:31.971 "enable_numa": false 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "sock", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "sock_set_default_impl", 00:17:31.971 "params": { 00:17:31.971 "impl_name": "posix" 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "sock_impl_set_options", 00:17:31.971 "params": { 00:17:31.971 "impl_name": "ssl", 00:17:31.971 "recv_buf_size": 4096, 00:17:31.971 "send_buf_size": 4096, 00:17:31.971 "enable_recv_pipe": true, 00:17:31.971 "enable_quickack": false, 00:17:31.971 "enable_placement_id": 0, 00:17:31.971 "enable_zerocopy_send_server": true, 00:17:31.971 "enable_zerocopy_send_client": false, 00:17:31.971 "zerocopy_threshold": 0, 00:17:31.971 "tls_version": 0, 00:17:31.971 "enable_ktls": false 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "sock_impl_set_options", 00:17:31.971 "params": { 00:17:31.971 "impl_name": "posix", 00:17:31.971 "recv_buf_size": 2097152, 00:17:31.971 "send_buf_size": 2097152, 00:17:31.971 "enable_recv_pipe": true, 00:17:31.971 "enable_quickack": false, 00:17:31.971 "enable_placement_id": 0, 00:17:31.971 "enable_zerocopy_send_server": true, 
00:17:31.971 "enable_zerocopy_send_client": false, 00:17:31.971 "zerocopy_threshold": 0, 00:17:31.971 "tls_version": 0, 00:17:31.971 "enable_ktls": false 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "vmd", 00:17:31.971 "config": [] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "accel", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "accel_set_options", 00:17:31.971 "params": { 00:17:31.971 "small_cache_size": 128, 00:17:31.971 "large_cache_size": 16, 00:17:31.971 "task_count": 2048, 00:17:31.971 "sequence_count": 2048, 00:17:31.971 "buf_count": 2048 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "bdev", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "bdev_set_options", 00:17:31.971 "params": { 00:17:31.971 "bdev_io_pool_size": 65535, 00:17:31.971 "bdev_io_cache_size": 256, 00:17:31.971 "bdev_auto_examine": true, 00:17:31.971 "iobuf_small_cache_size": 128, 00:17:31.971 "iobuf_large_cache_size": 16 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_raid_set_options", 00:17:31.971 "params": { 00:17:31.971 "process_window_size_kb": 1024, 00:17:31.971 "process_max_bandwidth_mb_sec": 0 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_iscsi_set_options", 00:17:31.971 "params": { 00:17:31.971 "timeout_sec": 30 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_nvme_set_options", 00:17:31.971 "params": { 00:17:31.971 "action_on_timeout": "none", 00:17:31.971 "timeout_us": 0, 00:17:31.971 "timeout_admin_us": 0, 00:17:31.971 "keep_alive_timeout_ms": 10000, 00:17:31.971 "arbitration_burst": 0, 00:17:31.971 "low_priority_weight": 0, 00:17:31.971 "medium_priority_weight": 0, 00:17:31.971 "high_priority_weight": 0, 00:17:31.971 "nvme_adminq_poll_period_us": 10000, 00:17:31.971 "nvme_ioq_poll_period_us": 0, 00:17:31.971 "io_queue_requests": 0, 00:17:31.971 "delay_cmd_submit": true, 00:17:31.971 "transport_retry_count": 4, 00:17:31.971 "bdev_retry_count": 3, 00:17:31.971 "transport_ack_timeout": 0, 00:17:31.971 "ctrlr_loss_timeout_sec": 0, 00:17:31.971 "reconnect_delay_sec": 0, 00:17:31.971 "fast_io_fail_timeout_sec": 0, 00:17:31.971 "disable_auto_failback": false, 00:17:31.971 "generate_uuids": false, 00:17:31.971 "transport_tos": 0, 00:17:31.971 "nvme_error_stat": false, 00:17:31.971 "rdma_srq_size": 0, 00:17:31.971 "io_path_stat": false, 00:17:31.971 "allow_accel_sequence": false, 00:17:31.971 "rdma_max_cq_size": 0, 00:17:31.971 "rdma_cm_event_timeout_ms": 0, 00:17:31.971 "dhchap_digests": [ 00:17:31.971 "sha256", 00:17:31.971 "sha384", 00:17:31.971 "sha512" 00:17:31.971 ], 00:17:31.971 "dhchap_dhgroups": [ 00:17:31.971 "null", 00:17:31.971 "ffdhe2048", 00:17:31.971 "ffdhe3072", 00:17:31.971 "ffdhe4096", 00:17:31.971 "ffdhe6144", 00:17:31.971 "ffdhe8192" 00:17:31.971 ] 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_nvme_set_hotplug", 00:17:31.971 "params": { 00:17:31.971 "period_us": 100000, 00:17:31.971 "enable": false 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_malloc_create", 00:17:31.971 "params": { 00:17:31.971 "name": "malloc0", 00:17:31.971 "num_blocks": 8192, 00:17:31.971 "block_size": 4096, 00:17:31.971 "physical_block_size": 4096, 00:17:31.971 "uuid": "a0bba888-8a59-4a37-b1c4-1d15fb549397", 00:17:31.971 "optimal_io_boundary": 0, 00:17:31.971 "md_size": 0, 00:17:31.971 "dif_type": 0, 00:17:31.971 
"dif_is_head_of_md": false, 00:17:31.971 "dif_pi_format": 0 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "bdev_wait_for_examine" 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "scsi", 00:17:31.971 "config": null 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "scheduler", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "framework_set_scheduler", 00:17:31.971 "params": { 00:17:31.971 "name": "static" 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "vhost_scsi", 00:17:31.971 "config": [] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "vhost_blk", 00:17:31.971 "config": [] 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "subsystem": "ublk", 00:17:31.971 "config": [ 00:17:31.971 { 00:17:31.971 "method": "ublk_create_target", 00:17:31.971 "params": { 00:17:31.971 "cpumask": "1" 00:17:31.971 } 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "method": "ublk_start_disk", 00:17:31.971 "params": { 00:17:31.971 "bdev_name": "malloc0", 00:17:31.971 "ublk_id": 0, 00:17:31.971 "num_queues": 1, 00:17:31.971 "queue_depth": 128 00:17:31.971 } 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }, 00:17:31.971 { 00:17:31.972 "subsystem": "nbd", 00:17:31.972 "config": [] 00:17:31.972 }, 00:17:31.972 { 00:17:31.972 "subsystem": "nvmf", 00:17:31.972 "config": [ 00:17:31.972 { 00:17:31.972 "method": "nvmf_set_config", 00:17:31.972 "params": { 00:17:31.972 "discovery_filter": "match_any", 00:17:31.972 "admin_cmd_passthru": { 00:17:31.972 "identify_ctrlr": false 00:17:31.972 }, 00:17:31.972 "dhchap_digests": [ 00:17:31.972 "sha256", 00:17:31.972 "sha384", 00:17:31.972 "sha512" 00:17:31.972 ], 00:17:31.972 "dhchap_dhgroups": [ 00:17:31.972 "null", 00:17:31.972 "ffdhe2048", 00:17:31.972 "ffdhe3072", 00:17:31.972 "ffdhe4096", 00:17:31.972 "ffdhe6144", 00:17:31.972 "ffdhe8192" 00:17:31.972 ] 00:17:31.972 } 00:17:31.972 }, 00:17:31.972 { 00:17:31.972 "method": "nvmf_set_max_subsystems", 00:17:31.972 "params": { 00:17:31.972 "max_subsystems": 1024 00:17:31.972 } 00:17:31.972 }, 00:17:31.972 { 00:17:31.972 "method": "nvmf_set_crdt", 00:17:31.972 "params": { 00:17:31.972 "crdt1": 0, 00:17:31.972 "crdt2": 0, 00:17:31.972 "crdt3": 0 00:17:31.972 } 00:17:31.972 } 00:17:31.972 ] 00:17:31.972 }, 00:17:31.972 { 00:17:31.972 "subsystem": "iscsi", 00:17:31.972 "config": [ 00:17:31.972 { 00:17:31.972 "method": "iscsi_set_options", 00:17:31.972 "params": { 00:17:31.972 "node_base": "iqn.2016-06.io.spdk", 00:17:31.972 "max_sessions": 128, 00:17:31.972 "max_connections_per_session": 2, 00:17:31.972 "max_queue_depth": 64, 00:17:31.972 "default_time2wait": 2, 00:17:31.972 "default_time2retain": 20, 00:17:31.972 "first_burst_length": 8192, 00:17:31.972 "immediate_data": true, 00:17:31.972 "allow_duplicated_isid": false, 00:17:31.972 "error_recovery_level": 0, 00:17:31.972 "nop_timeout": 60, 00:17:31.972 "nop_in_interval": 30, 00:17:31.972 "disable_chap": false, 00:17:31.972 "require_chap": false, 00:17:31.972 "mutual_chap": false, 00:17:31.972 "chap_group": 0, 00:17:31.972 "max_large_datain_per_connection": 64, 00:17:31.972 "max_r2t_per_connection": 4, 00:17:31.972 "pdu_pool_size": 36864, 00:17:31.972 "immediate_data_pool_size": 16384, 00:17:31.972 "data_out_pool_size": 2048 00:17:31.972 } 00:17:31.972 } 00:17:31.972 ] 00:17:31.972 } 00:17:31.972 ] 00:17:31.972 }' 00:17:31.972 [2024-11-20 08:30:19.524843] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:17:31.972 [2024-11-20 08:30:19.525035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72175 ] 00:17:32.231 [2024-11-20 08:30:19.737058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.490 [2024-11-20 08:30:19.841897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.497 [2024-11-20 08:30:20.832008] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:33.497 [2024-11-20 08:30:20.833160] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:33.497 [2024-11-20 08:30:20.840150] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:33.497 [2024-11-20 08:30:20.840243] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:33.497 [2024-11-20 08:30:20.840255] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:33.497 [2024-11-20 08:30:20.840263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:33.497 [2024-11-20 08:30:20.849088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:33.497 [2024-11-20 08:30:20.849114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:33.497 [2024-11-20 08:30:20.856020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:33.497 [2024-11-20 08:30:20.856113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:33.497 [2024-11-20 08:30:20.872007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@871 -- # return 0 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72175 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' -z 72175 ']' 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@961 -- # kill -0 72175 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # uname 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:33.497 08:30:20 ublk.test_save_ublk_config -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72175 00:17:33.497 killing process with pid 72175 00:17:33.497 08:30:21 ublk.test_save_ublk_config -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:17:33.497 
08:30:21 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:17:33.497 08:30:21 ublk.test_save_ublk_config -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72175' 00:17:33.497 08:30:21 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # kill 72175 00:17:33.497 08:30:21 ublk.test_save_ublk_config -- common/autotest_common.sh@981 -- # wait 72175 00:17:35.420 [2024-11-20 08:30:22.620566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:35.420 [2024-11-20 08:30:22.650029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:35.420 [2024-11-20 08:30:22.650164] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:35.420 [2024-11-20 08:30:22.659038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:35.420 [2024-11-20 08:30:22.659090] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:35.420 [2024-11-20 08:30:22.659099] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:35.420 [2024-11-20 08:30:22.659125] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:35.420 [2024-11-20 08:30:22.659261] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:37.325 08:30:24 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:37.325 00:17:37.325 real 0m10.016s 00:17:37.325 user 0m7.497s 00:17:37.325 sys 0m3.248s 00:17:37.325 08:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:37.325 08:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.325 ************************************ 00:17:37.325 END TEST test_save_ublk_config 00:17:37.325 ************************************ 00:17:37.325 08:30:24 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72266 00:17:37.325 08:30:24 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:37.325 08:30:24 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.325 08:30:24 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72266 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@838 -- # '[' -z 72266 ']' 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:37.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:37.325 08:30:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:37.325 [2024-11-20 08:30:24.597229] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:17:37.325 [2024-11-20 08:30:24.597368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72266 ] 00:17:37.325 [2024-11-20 08:30:24.777533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:37.325 [2024-11-20 08:30:24.881571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.325 [2024-11-20 08:30:24.881604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.260 08:30:25 ublk -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:38.260 08:30:25 ublk -- common/autotest_common.sh@871 -- # return 0 00:17:38.260 08:30:25 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:38.260 08:30:25 ublk -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:38.260 08:30:25 ublk -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:38.260 08:30:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 ************************************ 00:17:38.260 START TEST test_create_ublk 00:17:38.260 ************************************ 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@1132 -- # test_create_ublk 00:17:38.260 08:30:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 [2024-11-20 08:30:25.741008] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:38.260 [2024-11-20 08:30:25.743412] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.260 08:30:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:38.260 08:30:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.260 08:30:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:38.517 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.517 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:38.517 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:38.517 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.517 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:38.517 [2024-11-20 08:30:26.034173] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:38.517 [2024-11-20 08:30:26.034628] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:38.517 [2024-11-20 08:30:26.034646] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:38.517 [2024-11-20 08:30:26.034655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:38.517 [2024-11-20 08:30:26.043315] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:38.517 [2024-11-20 08:30:26.043337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:38.517 
[2024-11-20 08:30:26.050017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:38.517 [2024-11-20 08:30:26.060060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:38.776 [2024-11-20 08:30:26.084020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:38.776 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:38.776 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.776 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:38.776 08:30:26 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:38.776 { 00:17:38.776 "ublk_device": "/dev/ublkb0", 00:17:38.776 "id": 0, 00:17:38.776 "queue_depth": 512, 00:17:38.776 "num_queues": 4, 00:17:38.776 "bdev_name": "Malloc0" 00:17:38.776 } 00:17:38.776 ]' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
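At this point the harness has assembled the complete fio command line for the pattern-write pass. For reference, the sequence it just traced can be reproduced by hand against a running spdk_tgt; a minimal sketch, assuming the repo layout this log uses (scripts/rpc.py relative to the SPDK root, and Malloc0 being the name the create call returns, are assumptions):

# Create the ublk target, a 128 MiB malloc bdev with a 4 KiB block size,
# and expose it as /dev/ublkb0 with 4 queues of depth 512 -- the same
# rpc_cmd calls ublk.sh traced above.
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create 128 4096      # prints the new bdev name (Malloc0 here)
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
scripts/rpc.py ublk_get_disks                   # /dev/ublkb0 should appear in the JSON

# The assembled fio_template: a 10 s time-based 0xcc pattern write over the
# first 128 MiB of the device, direct I/O, verify-state saving disabled.
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because the job is time_based and the write phase consumes the entire runtime, the verification read phase never runs; fio prints exactly that warning below.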
00:17:38.776 08:30:26 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:39.034 fio: verification read phase will never start because write phase uses all of runtime 00:17:39.034 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:39.034 fio-3.35 00:17:39.034 Starting 1 process 00:17:49.010 00:17:49.010 fio_test: (groupid=0, jobs=1): err= 0: pid=72318: Wed Nov 20 08:30:36 2024 00:17:49.010 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(551MiB/10001msec); 0 zone resets 00:17:49.010 clat (usec): min=37, max=4091, avg=70.09, stdev=103.08 00:17:49.010 lat (usec): min=38, max=4092, avg=70.55, stdev=103.08 00:17:49.010 clat percentiles (usec): 00:17:49.010 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 53], 00:17:49.010 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:17:49.010 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 137], 95.00th=[ 149], 00:17:49.010 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 1991], 99.95th=[ 2966], 00:17:49.010 | 99.99th=[ 3556] 00:17:49.010 bw ( KiB/s): min=26632, max=76183, per=99.25%, avg=55990.68, stdev=18026.64, samples=19 00:17:49.010 iops : min= 6658, max=19045, avg=13997.63, stdev=4506.61, samples=19 00:17:49.010 lat (usec) : 50=5.05%, 100=82.20%, 250=12.55%, 500=0.01%, 750=0.02% 00:17:49.010 lat (usec) : 1000=0.01% 00:17:49.010 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:17:49.010 cpu : usr=2.96%, sys=8.77%, ctx=141041, majf=0, minf=798 00:17:49.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:49.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.010 issued rwts: total=0,141040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:49.010 00:17:49.010 Run status group 0 (all jobs): 00:17:49.010 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=551MiB (578MB), run=10001-10001msec 00:17:49.010 00:17:49.010 Disk stats (read/write): 00:17:49.010 ublkb0: ios=0/139349, merge=0/0, ticks=0/8805, in_queue=8806, util=99.13% 00:17:49.270 08:30:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.270 [2024-11-20 08:30:36.575187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:49.270 [2024-11-20 08:30:36.612051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:49.270 [2024-11-20 08:30:36.612763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:49.270 [2024-11-20 08:30:36.620029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:49.270 [2024-11-20 08:30:36.620290] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:49.270 [2024-11-20 08:30:36.620304] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:49.270 08:30:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # local es=0 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@658 -- # rpc_cmd ublk_stop_disk 0 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.270 [2024-11-20 08:30:36.643096] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:49.270 request: 00:17:49.270 { 00:17:49.270 "ublk_id": 0, 00:17:49.270 "method": "ublk_stop_disk", 00:17:49.270 "req_id": 1 00:17:49.270 } 00:17:49.270 Got JSON-RPC error response 00:17:49.270 response: 00:17:49.270 { 00:17:49.270 "code": -19, 00:17:49.270 "message": "No such device" 00:17:49.270 } 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@658 -- # es=1 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:49.270 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:49.270 08:30:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:49.271 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:49.271 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.271 [2024-11-20 08:30:36.658105] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:49.271 [2024-11-20 08:30:36.667010] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:49.271 [2024-11-20 08:30:36.667057] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:49.271 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:49.271 08:30:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:49.271 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:49.271 08:30:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.839 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:49.839 08:30:37 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:49.839 08:30:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:49.839 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:49.839 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.839 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:49.840 08:30:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:49.840 08:30:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:50.098 08:30:37 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:50.098 08:30:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:50.098 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.098 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.098 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.098 08:30:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:50.098 08:30:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:50.098 ************************************ 00:17:50.098 END TEST test_create_ublk 00:17:50.098 ************************************ 00:17:50.098 08:30:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:50.098 00:17:50.098 real 0m11.773s 00:17:50.098 user 0m0.694s 00:17:50.098 sys 0m0.996s 00:17:50.098 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:50.098 08:30:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.098 08:30:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:50.098 08:30:37 ublk -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:50.098 08:30:37 ublk -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:50.098 08:30:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.098 ************************************ 00:17:50.098 START TEST test_create_multi_ublk 00:17:50.098 ************************************ 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1132 -- # test_create_multi_ublk 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.098 [2024-11-20 08:30:37.586004] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:50.098 [2024-11-20 08:30:37.588511] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.098 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.358 [2024-11-20 08:30:37.866155] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:50.358 [2024-11-20 08:30:37.866598] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:50.358 [2024-11-20 08:30:37.866615] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:50.358 [2024-11-20 08:30:37.866636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:50.358 [2024-11-20 08:30:37.875255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:50.358 [2024-11-20 08:30:37.875281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:50.358 [2024-11-20 08:30:37.882023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:50.358 [2024-11-20 08:30:37.882594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:50.358 [2024-11-20 08:30:37.892058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.358 08:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.617 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:50.617 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:50.617 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.617 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 [2024-11-20 08:30:38.175147] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:50.617 [2024-11-20 08:30:38.175582] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:50.617 [2024-11-20 08:30:38.175602] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:50.617 [2024-11-20 08:30:38.175610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:50.878 [2024-11-20 08:30:38.183051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:50.878 [2024-11-20 08:30:38.183069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:50.878 [2024-11-20 08:30:38.191021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:50.878 [2024-11-20 08:30:38.191578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:50.878 [2024-11-20 08:30:38.196547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:50.878 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:50.878 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:50.878 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:50.878 
08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:50.878 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:50.878 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.142 [2024-11-20 08:30:38.491144] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:51.142 [2024-11-20 08:30:38.491586] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:51.142 [2024-11-20 08:30:38.491603] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:51.142 [2024-11-20 08:30:38.491613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:51.142 [2024-11-20 08:30:38.499035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:51.142 [2024-11-20 08:30:38.499062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:51.142 [2024-11-20 08:30:38.507026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:51.142 [2024-11-20 08:30:38.507594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:51.142 [2024-11-20 08:30:38.515092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:51.142 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:51.143 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:51.143 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.405 [2024-11-20 08:30:38.810166] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:51.405 [2024-11-20 08:30:38.810603] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:51.405 [2024-11-20 08:30:38.810624] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:51.405 [2024-11-20 08:30:38.810632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:51.405 
[2024-11-20 08:30:38.818030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:51.405 [2024-11-20 08:30:38.818053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:51.405 [2024-11-20 08:30:38.826019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:51.405 [2024-11-20 08:30:38.826592] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:51.405 [2024-11-20 08:30:38.842031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:51.405 { 00:17:51.405 "ublk_device": "/dev/ublkb0", 00:17:51.405 "id": 0, 00:17:51.405 "queue_depth": 512, 00:17:51.405 "num_queues": 4, 00:17:51.405 "bdev_name": "Malloc0" 00:17:51.405 }, 00:17:51.405 { 00:17:51.405 "ublk_device": "/dev/ublkb1", 00:17:51.405 "id": 1, 00:17:51.405 "queue_depth": 512, 00:17:51.405 "num_queues": 4, 00:17:51.405 "bdev_name": "Malloc1" 00:17:51.405 }, 00:17:51.405 { 00:17:51.405 "ublk_device": "/dev/ublkb2", 00:17:51.405 "id": 2, 00:17:51.405 "queue_depth": 512, 00:17:51.405 "num_queues": 4, 00:17:51.405 "bdev_name": "Malloc2" 00:17:51.405 }, 00:17:51.405 { 00:17:51.405 "ublk_device": "/dev/ublkb3", 00:17:51.405 "id": 3, 00:17:51.405 "queue_depth": 512, 00:17:51.405 "num_queues": 4, 00:17:51.405 "bdev_name": "Malloc3" 00:17:51.405 } 00:17:51.405 ]' 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:51.405 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:51.664 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:51.664 08:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:51.664 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:51.922 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:52.181 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:52.441 [2024-11-20 08:30:39.748125] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.441 [2024-11-20 08:30:39.786078] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:52.441 [2024-11-20 08:30:39.786960] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.441 [2024-11-20 08:30:39.796014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.441 [2024-11-20 08:30:39.796305] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:52.441 [2024-11-20 08:30:39.796323] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:52.441 [2024-11-20 08:30:39.804096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.441 [2024-11-20 08:30:39.843111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:52.441 [2024-11-20 08:30:39.843918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.441 [2024-11-20 08:30:39.851024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.441 [2024-11-20 08:30:39.851289] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:52.441 [2024-11-20 08:30:39.851302] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:52.441 [2024-11-20 08:30:39.867105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.441 [2024-11-20 08:30:39.907021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:52.441 [2024-11-20 08:30:39.907837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.441 [2024-11-20 08:30:39.916068] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.441 [2024-11-20 08:30:39.916315] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:52.441 [2024-11-20 08:30:39.916327] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.441 [2024-11-20 08:30:39.931110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.441 [2024-11-20 08:30:39.967077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:52.441 [2024-11-20 08:30:39.967794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.441 [2024-11-20 08:30:39.975028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.441 [2024-11-20 08:30:39.975284] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:52.441 [2024-11-20 08:30:39.975296] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:52.441 08:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:52.701 [2024-11-20 08:30:40.168109] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:52.701 [2024-11-20 08:30:40.176014] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:52.701 [2024-11-20 08:30:40.176052] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:52.701 08:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:52.701 08:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:52.701 08:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:52.701 08:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:52.701 08:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.638 08:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:53.638 08:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.638 08:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:53.638 08:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:53.638 08:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.208 08:30:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:54.208 08:30:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:54.208 08:30:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:54.208 08:30:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:54.208 08:30:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.776 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:54.776 08:30:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:54.776 08:30:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:54.776 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:54.776 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:55.035 08:30:42 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:55.035 00:17:55.035 real 0m5.005s 00:17:55.035 user 0m1.020s 00:17:55.035 sys 0m0.235s 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:55.035 ************************************ 00:17:55.035 END TEST test_create_multi_ublk 00:17:55.035 08:30:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 ************************************ 00:17:55.294 08:30:42 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:55.294 08:30:42 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:55.294 08:30:42 ublk -- ublk/ublk.sh@130 -- # killprocess 72266 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@957 -- # '[' -z 72266 ']' 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@961 -- # kill -0 72266 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@962 -- # uname 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72266 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72266' 00:17:55.294 killing process with pid 72266 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@976 -- # kill 72266 00:17:55.294 08:30:42 ublk -- common/autotest_common.sh@981 -- # wait 72266 00:17:56.259 [2024-11-20 08:30:43.799794] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:56.259 [2024-11-20 08:30:43.799845] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:57.636 00:17:57.636 real 0m30.872s 00:17:57.636 user 0m44.209s 00:17:57.636 sys 0m10.463s 00:17:57.636 08:30:44 ublk -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:57.636 08:30:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:57.636 ************************************ 00:17:57.636 END TEST ublk 00:17:57.636 ************************************ 00:17:57.636 08:30:45 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:57.636 
08:30:45 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:57.636 08:30:45 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:57.636 08:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:57.636 ************************************ 00:17:57.636 START TEST ublk_recovery 00:17:57.636 ************************************ 00:17:57.636 08:30:45 ublk_recovery -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:57.896 * Looking for test storage... 00:17:57.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1638 -- # lcov --version 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.896 08:30:45 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:17:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.896 --rc genhtml_branch_coverage=1 00:17:57.896 --rc genhtml_function_coverage=1 00:17:57.896 --rc genhtml_legend=1 00:17:57.896 --rc geninfo_all_blocks=1 00:17:57.896 --rc geninfo_unexecuted_blocks=1 00:17:57.896 00:17:57.896 ' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:17:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.896 --rc genhtml_branch_coverage=1 00:17:57.896 --rc genhtml_function_coverage=1 00:17:57.896 --rc genhtml_legend=1 00:17:57.896 --rc geninfo_all_blocks=1 00:17:57.896 --rc geninfo_unexecuted_blocks=1 00:17:57.896 00:17:57.896 ' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:17:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.896 --rc genhtml_branch_coverage=1 00:17:57.896 --rc genhtml_function_coverage=1 00:17:57.896 --rc genhtml_legend=1 00:17:57.896 --rc geninfo_all_blocks=1 00:17:57.896 --rc geninfo_unexecuted_blocks=1 00:17:57.896 00:17:57.896 ' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:17:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.896 --rc genhtml_branch_coverage=1 00:17:57.896 --rc genhtml_function_coverage=1 00:17:57.896 --rc genhtml_legend=1 00:17:57.896 --rc geninfo_all_blocks=1 00:17:57.896 --rc geninfo_unexecuted_blocks=1 00:17:57.896 00:17:57.896 ' 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:57.896 08:30:45 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72708 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.896 08:30:45 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72708 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@838 -- # '[' -z 72708 ']' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:57.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:57.896 08:30:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.896 [2024-11-20 08:30:45.449937] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:57.896 [2024-11-20 08:30:45.450241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72708 ] 00:17:58.154 [2024-11-20 08:30:45.633295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.413 [2024-11-20 08:30:45.742735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.413 [2024-11-20 08:30:45.742769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@871 -- # return 0 00:17:59.350 08:30:46 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.350 [2024-11-20 08:30:46.591024] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:59.350 [2024-11-20 08:30:46.593554] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:59.350 08:30:46 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.350 malloc0 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:59.350 08:30:46 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.350 [2024-11-20 08:30:46.746159] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:59.350 [2024-11-20 08:30:46.746285] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:59.350 [2024-11-20 08:30:46.746307] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:59.350 [2024-11-20 08:30:46.746318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:59.350 [2024-11-20 08:30:46.754030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:59.350 [2024-11-20 08:30:46.754055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:59.350 [2024-11-20 08:30:46.762020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:59.350 [2024-11-20 08:30:46.762161] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:59.350 [2024-11-20 08:30:46.784028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:59.350 1 00:17:59.350 08:30:46 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:59.350 08:30:46 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:00.287 08:30:47 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=72745 00:18:00.287 08:30:47 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:00.287 08:30:47 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:00.548 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.548 fio-3.35 00:18:00.548 Starting 1 process 00:18:05.824 08:30:52 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72708 00:18:05.824 08:30:52 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:11.100 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72708 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:11.100 08:30:57 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=72856 00:18:11.100 08:30:57 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:11.100 08:30:57 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.100 08:30:57 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 72856 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@838 -- # '[' -z 72856 ']' 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:11.100 08:30:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.100 [2024-11-20 08:30:57.927189] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
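The target that was serving /dev/ublkb1 has just been SIGKILLed a few seconds into a 60 s fio job, and a replacement spdk_tgt (pid 72856) is starting below. Condensed into the underlying rpc.py calls, what the harness does next looks roughly like this (a sketch of the traced rpc_cmd sequence, not a drop-in script; the binary and script paths are the ones this log uses):

# Re-create the ublk target and the backing bdev inside the new process,
# then ask the kernel ublk driver to re-attach the still-open /dev/ublkb1.
# ublk_recover_disk drives UBLK_CMD_START_USER_RECOVERY and
# UBLK_CMD_END_USER_RECOVERY instead of the ADD_DEV/START_DEV path.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_recover_disk malloc0 1

The repeated UBLK_CMD_GET_DEV_INFO / "Ublk 1 device state 1" lines that follow are the once-a-second polling of the kernel device before recovery is kicked off; END_USER_RECOVERY then completes about 21 s after START_USER_RECOVERY (08:31:02 to 08:31:23), and fio resumes.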
00:18:11.100 [2024-11-20 08:30:57.927585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72856 ] 00:18:11.100 [2024-11-20 08:30:58.107872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:11.100 [2024-11-20 08:30:58.224125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.100 [2024-11-20 08:30:58.224156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@871 -- # return 0 00:18:11.668 08:30:59 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.668 [2024-11-20 08:30:59.092009] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:11.668 [2024-11-20 08:30:59.094555] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:11.668 08:30:59 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:11.668 08:30:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.927 malloc0 00:18:11.927 08:30:59 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:11.927 08:30:59 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:11.927 08:30:59 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:11.927 08:30:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.927 [2024-11-20 08:30:59.251168] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:11.927 [2024-11-20 08:30:59.251208] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:11.927 [2024-11-20 08:30:59.251220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:11.927 [2024-11-20 08:30:59.259049] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:11.927 [2024-11-20 08:30:59.259076] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:11.927 1 00:18:11.927 08:30:59 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:11.927 08:30:59 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 72745 00:18:12.865 [2024-11-20 08:31:00.257487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:12.865 [2024-11-20 08:31:00.264057] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:12.865 [2024-11-20 08:31:00.264081] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:13.802 [2024-11-20 08:31:01.262501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:13.802 [2024-11-20 08:31:01.269012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:13.802 [2024-11-20 08:31:01.269037] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:14.742 [2024-11-20 08:31:02.269038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:14.742 [2024-11-20 08:31:02.277024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:14.742 [2024-11-20 08:31:02.277045] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:14.742 [2024-11-20 08:31:02.277059] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:14.742 [2024-11-20 08:31:02.277163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:36.703 [2024-11-20 08:31:23.028049] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:36.703 [2024-11-20 08:31:23.035586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:36.703 [2024-11-20 08:31:23.043297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:36.703 [2024-11-20 08:31:23.043322] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:03.261 00:19:03.261 fio_test: (groupid=0, jobs=1): err= 0: pid=72754: Wed Nov 20 08:31:48 2024 00:19:03.261 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(2611MiB/60002msec) 00:19:03.261 slat (usec): min=2, max=817, avg= 8.45, stdev= 2.60 00:19:03.261 clat (usec): min=1023, max=30250k, avg=6141.67, stdev=318210.72 00:19:03.261 lat (usec): min=1029, max=30250k, avg=6150.12, stdev=318210.71 00:19:03.261 clat percentiles (msec): 00:19:03.261 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:19:03.261 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:19:03.261 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 5], 00:19:03.261 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:19:03.261 | 99.99th=[17113] 00:19:03.261 bw ( KiB/s): min= 592, max=105552, per=100.00%, avg=87815.33, stdev=15614.20, samples=60 00:19:03.261 iops : min= 148, max=26388, avg=21953.80, stdev=3903.54, samples=60 00:19:03.261 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(2608MiB/60002msec); 0 zone resets 00:19:03.261 slat (usec): min=2, max=602, avg= 8.71, stdev= 2.58 00:19:03.261 clat (usec): min=986, max=30250k, avg=5334.64, stdev=271977.82 00:19:03.261 lat (usec): min=992, max=30250k, avg=5343.35, stdev=271977.81 00:19:03.261 clat percentiles (usec): 00:19:03.261 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2507], 00:19:03.261 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2868], 00:19:03.261 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3228], 95.00th=[ 4015], 00:19:03.261 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 8586], 99.95th=[ 9241], 00:19:03.261 | 99.99th=[13173] 00:19:03.261 bw ( KiB/s): min= 440, max=105160, per=100.00%, avg=87731.12, stdev=15493.13, samples=60 00:19:03.261 iops : min= 110, max=26290, avg=21932.75, stdev=3873.29, samples=60 00:19:03.261 lat (usec) : 1000=0.01% 00:19:03.261 lat (msec) : 2=0.48%, 4=94.48%, 10=5.02%, 20=0.01%, >=2000=0.01% 00:19:03.261 cpu : usr=6.17%, sys=19.03%, ctx=57037, majf=0, minf=13 00:19:03.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:03.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.261 issued rwts: total=668460,667767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.261 
latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.261 00:19:03.261 Run status group 0 (all jobs): 00:19:03.261 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=2611MiB (2738MB), run=60002-60002msec 00:19:03.261 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=2608MiB (2735MB), run=60002-60002msec 00:19:03.261 00:19:03.261 Disk stats (read/write): 00:19:03.261 ublkb1: ios=666008/665356, merge=0/0, ticks=4037918/3418572, in_queue=7456490, util=99.92% 00:19:03.261 08:31:48 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.261 [2024-11-20 08:31:48.077056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:03.261 [2024-11-20 08:31:48.118040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:03.261 [2024-11-20 08:31:48.118286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:03.261 [2024-11-20 08:31:48.127034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:03.261 [2024-11-20 08:31:48.127191] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:03.261 [2024-11-20 08:31:48.127202] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:19:03.261 08:31:48 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.261 [2024-11-20 08:31:48.142147] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:03.261 [2024-11-20 08:31:48.150010] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:03.261 [2024-11-20 08:31:48.150051] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:19:03.261 08:31:48 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:03.261 08:31:48 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:03.261 08:31:48 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 72856 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@957 -- # '[' -z 72856 ']' 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@961 -- # kill -0 72856 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@962 -- # uname 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72856 00:19:03.261 killing process with pid 72856 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72856' 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@976 -- # kill 72856 00:19:03.261 08:31:48 ublk_recovery -- common/autotest_common.sh@981 -- # wait 72856 00:19:03.261 [2024-11-20 08:31:49.914353] ublk.c: 835:_ublk_fini: *DEBUG*: finish 
shutdown 00:19:03.261 [2024-11-20 08:31:49.914639] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:03.830 00:19:03.830 real 1m6.311s 00:19:03.830 user 1m50.830s 00:19:03.830 sys 0m26.036s 00:19:03.830 08:31:51 ublk_recovery -- common/autotest_common.sh@1133 -- # xtrace_disable 00:19:03.830 ************************************ 00:19:03.830 END TEST ublk_recovery 00:19:03.830 ************************************ 00:19:03.830 08:31:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 08:31:51 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:04.089 08:31:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:04.089 08:31:51 -- common/autotest_common.sh@735 -- # xtrace_disable 00:19:04.089 08:31:51 -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 08:31:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:04.089 08:31:51 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:04.089 08:31:51 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:19:04.089 08:31:51 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:19:04.089 08:31:51 -- common/autotest_common.sh@10 -- # set +x 00:19:04.089 ************************************ 00:19:04.089 START TEST ftl 00:19:04.089 ************************************ 00:19:04.089 08:31:51 ftl -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:04.349 * Looking for test storage... 00:19:04.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1638 -- # lcov --version 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.349 08:31:51 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.349 08:31:51 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.349 08:31:51 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.349 08:31:51 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.349 08:31:51 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.349 08:31:51 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:04.349 08:31:51 ftl -- scripts/common.sh@345 -- # : 1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.349 08:31:51 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.349 08:31:51 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@353 -- # local d=1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.349 08:31:51 ftl -- scripts/common.sh@355 -- # echo 1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.349 08:31:51 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@353 -- # local d=2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.349 08:31:51 ftl -- scripts/common.sh@355 -- # echo 2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.349 08:31:51 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.349 08:31:51 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.349 08:31:51 ftl -- scripts/common.sh@368 -- # return 0 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:19:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.349 --rc genhtml_branch_coverage=1 00:19:04.349 --rc genhtml_function_coverage=1 00:19:04.349 --rc genhtml_legend=1 00:19:04.349 --rc geninfo_all_blocks=1 00:19:04.349 --rc geninfo_unexecuted_blocks=1 00:19:04.349 00:19:04.349 ' 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:19:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.349 --rc genhtml_branch_coverage=1 00:19:04.349 --rc genhtml_function_coverage=1 00:19:04.349 --rc genhtml_legend=1 00:19:04.349 --rc geninfo_all_blocks=1 00:19:04.349 --rc geninfo_unexecuted_blocks=1 00:19:04.349 00:19:04.349 ' 00:19:04.349 08:31:51 ftl -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:19:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.350 --rc genhtml_branch_coverage=1 00:19:04.350 --rc genhtml_function_coverage=1 00:19:04.350 --rc genhtml_legend=1 00:19:04.350 --rc geninfo_all_blocks=1 00:19:04.350 --rc geninfo_unexecuted_blocks=1 00:19:04.350 00:19:04.350 ' 00:19:04.350 08:31:51 ftl -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:19:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.350 --rc genhtml_branch_coverage=1 00:19:04.350 --rc genhtml_function_coverage=1 00:19:04.350 --rc genhtml_legend=1 00:19:04.350 --rc geninfo_all_blocks=1 00:19:04.350 --rc geninfo_unexecuted_blocks=1 00:19:04.350 00:19:04.350 ' 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:04.350 08:31:51 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:04.350 08:31:51 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.350 08:31:51 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.350 08:31:51 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:04.350 08:31:51 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:04.350 08:31:51 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.350 08:31:51 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.350 08:31:51 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.350 08:31:51 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.350 08:31:51 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.350 08:31:51 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:04.350 08:31:51 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:04.350 08:31:51 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.350 08:31:51 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.350 08:31:51 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:04.350 08:31:51 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.350 08:31:51 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.350 08:31:51 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.350 08:31:51 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.350 08:31:51 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:04.350 08:31:51 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:04.350 08:31:51 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.350 08:31:51 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:04.350 08:31:51 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:04.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:05.179 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.179 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.179 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.179 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.179 08:31:52 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73675 00:19:05.179 08:31:52 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:05.179 08:31:52 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73675 00:19:05.179 08:31:52 ftl -- common/autotest_common.sh@838 -- # '[' -z 73675 ']' 00:19:05.179 08:31:52 ftl -- 
common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.179 08:31:52 ftl -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:05.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.179 08:31:52 ftl -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.179 08:31:52 ftl -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:05.179 08:31:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:05.438 [2024-11-20 08:31:52.822319] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:19:05.438 [2024-11-20 08:31:52.822441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:19:05.698 [2024-11-20 08:31:53.002497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.698 [2024-11-20 08:31:53.127117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.266 08:31:53 ftl -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:06.266 08:31:53 ftl -- common/autotest_common.sh@871 -- # return 0 00:19:06.266 08:31:53 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:06.526 08:31:53 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:07.464 08:31:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:07.464 08:31:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:08.032 08:31:55 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:08.032 08:31:55 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:08.032 08:31:55 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@50 -- # break 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@63 -- # break 00:19:08.292 08:31:55 ftl -- ftl/ftl.sh@66 -- # killprocess 73675 00:19:08.292 08:31:55 ftl -- common/autotest_common.sh@957 -- # '[' -z 73675 ']' 00:19:08.292 08:31:55 ftl -- common/autotest_common.sh@961 -- # kill -0 73675 00:19:08.292 08:31:55 ftl -- common/autotest_common.sh@962 -- # uname 00:19:08.292 08:31:55 ftl -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:08.551 08:31:55 ftl -- 
common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73675 00:19:08.551 killing process with pid 73675 00:19:08.551 08:31:55 ftl -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:19:08.551 08:31:55 ftl -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:19:08.551 08:31:55 ftl -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73675' 00:19:08.551 08:31:55 ftl -- common/autotest_common.sh@976 -- # kill 73675 00:19:08.551 08:31:55 ftl -- common/autotest_common.sh@981 -- # wait 73675 00:19:11.090 08:31:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:11.090 08:31:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:11.090 08:31:58 ftl -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:19:11.090 08:31:58 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:19:11.090 08:31:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:11.090 ************************************ 00:19:11.090 START TEST ftl_fio_basic 00:19:11.090 ************************************ 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:11.090 * Looking for test storage... 00:19:11.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1638 -- # lcov --version 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:19:11.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.090 --rc genhtml_branch_coverage=1 00:19:11.090 --rc genhtml_function_coverage=1 00:19:11.090 --rc genhtml_legend=1 00:19:11.090 --rc geninfo_all_blocks=1 00:19:11.090 --rc geninfo_unexecuted_blocks=1 00:19:11.090 00:19:11.090 ' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:19:11.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.090 --rc genhtml_branch_coverage=1 00:19:11.090 --rc genhtml_function_coverage=1 00:19:11.090 --rc genhtml_legend=1 00:19:11.090 --rc geninfo_all_blocks=1 00:19:11.090 --rc geninfo_unexecuted_blocks=1 00:19:11.090 00:19:11.090 ' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:19:11.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.090 --rc genhtml_branch_coverage=1 00:19:11.090 --rc genhtml_function_coverage=1 00:19:11.090 --rc genhtml_legend=1 00:19:11.090 --rc geninfo_all_blocks=1 00:19:11.090 --rc geninfo_unexecuted_blocks=1 00:19:11.090 00:19:11.090 ' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:19:11.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.090 --rc genhtml_branch_coverage=1 00:19:11.090 --rc genhtml_function_coverage=1 00:19:11.090 --rc genhtml_legend=1 00:19:11.090 --rc geninfo_all_blocks=1 00:19:11.090 --rc geninfo_unexecuted_blocks=1 00:19:11.090 00:19:11.090 ' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=73835 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 73835 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # '[' -z 73835 ']' 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.090 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:11.091 08:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:11.349 [2024-11-20 08:31:58.734652] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
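Below, fio.sh assembles the FTL bdev stack across the two QEMU NVMe controllers before running the basic suite. Condensed from the RPC calls logged below (rpc.py is shorthand for the full scripts/rpc.py path; the UUID variables stand in for the values this run generates):

    # illustrative summary of the setup steps logged below, not the test script itself
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"            # thin-provisioned lvol
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 60

The repeated get_bdev_size calls below just compute block_size × num_blocks in MiB: 4096 B × 1310720 blocks = 5120 MiB for each NVMe namespace, and 4096 B × 26476544 blocks = 103424 MiB for the thin lvol. The "[: -eq: unary operator expected" failure from fio.sh line 52 further down is a plain bash quirk: an empty variable makes the test expand to '[' -eq 1 ']', which [ rejects with a nonzero status, so the guarded branch is simply skipped and the run continues, as the log shows.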
00:19:11.349 [2024-11-20 08:31:58.734773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73835 ] 00:19:11.609 [2024-11-20 08:31:58.917916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.609 [2024-11-20 08:31:59.048121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.609 [2024-11-20 08:31:59.048269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.609 [2024-11-20 08:31:59.048300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@871 -- # return 0 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:12.555 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1370 -- # local bdev_name=nvme0n1 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1371 -- # local bdev_info 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1372 -- # local bs 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1373 -- # local nb 00:19:12.814 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:13.073 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:19:13.073 { 00:19:13.073 "name": "nvme0n1", 00:19:13.073 "aliases": [ 00:19:13.073 "e216f077-1025-4bf9-9a85-cac90b05fdb2" 00:19:13.073 ], 00:19:13.073 "product_name": "NVMe disk", 00:19:13.073 "block_size": 4096, 00:19:13.073 "num_blocks": 1310720, 00:19:13.073 "uuid": "e216f077-1025-4bf9-9a85-cac90b05fdb2", 00:19:13.073 "numa_id": -1, 00:19:13.073 "assigned_rate_limits": { 00:19:13.073 "rw_ios_per_sec": 0, 00:19:13.073 "rw_mbytes_per_sec": 0, 00:19:13.073 "r_mbytes_per_sec": 0, 00:19:13.073 "w_mbytes_per_sec": 0 00:19:13.073 }, 00:19:13.073 "claimed": false, 00:19:13.073 "zoned": false, 00:19:13.073 "supported_io_types": { 00:19:13.073 "read": true, 00:19:13.073 "write": true, 00:19:13.074 "unmap": true, 00:19:13.074 "flush": true, 00:19:13.074 "reset": true, 00:19:13.074 "nvme_admin": true, 00:19:13.074 "nvme_io": true, 00:19:13.074 "nvme_io_md": false, 00:19:13.074 "write_zeroes": true, 00:19:13.074 "zcopy": false, 00:19:13.074 "get_zone_info": false, 00:19:13.074 "zone_management": false, 00:19:13.074 "zone_append": false, 00:19:13.074 "compare": true, 00:19:13.074 "compare_and_write": false, 00:19:13.074 "abort": true, 00:19:13.074 
"seek_hole": false, 00:19:13.074 "seek_data": false, 00:19:13.074 "copy": true, 00:19:13.074 "nvme_iov_md": false 00:19:13.074 }, 00:19:13.074 "driver_specific": { 00:19:13.074 "nvme": [ 00:19:13.074 { 00:19:13.074 "pci_address": "0000:00:11.0", 00:19:13.074 "trid": { 00:19:13.074 "trtype": "PCIe", 00:19:13.074 "traddr": "0000:00:11.0" 00:19:13.074 }, 00:19:13.074 "ctrlr_data": { 00:19:13.074 "cntlid": 0, 00:19:13.074 "vendor_id": "0x1b36", 00:19:13.074 "model_number": "QEMU NVMe Ctrl", 00:19:13.074 "serial_number": "12341", 00:19:13.074 "firmware_revision": "8.0.0", 00:19:13.074 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:13.074 "oacs": { 00:19:13.074 "security": 0, 00:19:13.074 "format": 1, 00:19:13.074 "firmware": 0, 00:19:13.074 "ns_manage": 1 00:19:13.074 }, 00:19:13.074 "multi_ctrlr": false, 00:19:13.074 "ana_reporting": false 00:19:13.074 }, 00:19:13.074 "vs": { 00:19:13.074 "nvme_version": "1.4" 00:19:13.074 }, 00:19:13.074 "ns_data": { 00:19:13.074 "id": 1, 00:19:13.074 "can_share": false 00:19:13.074 } 00:19:13.074 } 00:19:13.074 ], 00:19:13.074 "mp_policy": "active_passive" 00:19:13.074 } 00:19:13.074 } 00:19:13.074 ]' 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # bs=4096 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # nb=1310720 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # echo 5120 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:13.074 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:13.333 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:13.333 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:13.591 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a1b5b481-f83c-43e9-8f00-e20525eb2655 00:19:13.591 08:32:00 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a1b5b481-f83c-43e9-8f00-e20525eb2655 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1370 -- # local bdev_name=deeb0da8-2be7-42bc-a44f-148a214f16ca 
00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1371 -- # local bdev_info 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1372 -- # local bs 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1373 -- # local nb 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:19:13.850 { 00:19:13.850 "name": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:13.850 "aliases": [ 00:19:13.850 "lvs/nvme0n1p0" 00:19:13.850 ], 00:19:13.850 "product_name": "Logical Volume", 00:19:13.850 "block_size": 4096, 00:19:13.850 "num_blocks": 26476544, 00:19:13.850 "uuid": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:13.850 "assigned_rate_limits": { 00:19:13.850 "rw_ios_per_sec": 0, 00:19:13.850 "rw_mbytes_per_sec": 0, 00:19:13.850 "r_mbytes_per_sec": 0, 00:19:13.850 "w_mbytes_per_sec": 0 00:19:13.850 }, 00:19:13.850 "claimed": false, 00:19:13.850 "zoned": false, 00:19:13.850 "supported_io_types": { 00:19:13.850 "read": true, 00:19:13.850 "write": true, 00:19:13.850 "unmap": true, 00:19:13.850 "flush": false, 00:19:13.850 "reset": true, 00:19:13.850 "nvme_admin": false, 00:19:13.850 "nvme_io": false, 00:19:13.850 "nvme_io_md": false, 00:19:13.850 "write_zeroes": true, 00:19:13.850 "zcopy": false, 00:19:13.850 "get_zone_info": false, 00:19:13.850 "zone_management": false, 00:19:13.850 "zone_append": false, 00:19:13.850 "compare": false, 00:19:13.850 "compare_and_write": false, 00:19:13.850 "abort": false, 00:19:13.850 "seek_hole": true, 00:19:13.850 "seek_data": true, 00:19:13.850 "copy": false, 00:19:13.850 "nvme_iov_md": false 00:19:13.850 }, 00:19:13.850 "driver_specific": { 00:19:13.850 "lvol": { 00:19:13.850 "lvol_store_uuid": "a1b5b481-f83c-43e9-8f00-e20525eb2655", 00:19:13.850 "base_bdev": "nvme0n1", 00:19:13.850 "thin_provision": true, 00:19:13.850 "num_allocated_clusters": 0, 00:19:13.850 "snapshot": false, 00:19:13.850 "clone": false, 00:19:13.850 "esnap_clone": false 00:19:13.850 } 00:19:13.850 } 00:19:13.850 } 00:19:13.850 ]' 00:19:13.850 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # bs=4096 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # nb=26476544 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # echo 103424 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:14.109 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1370 -- # local bdev_name=deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.366 08:32:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1371 -- # local bdev_info 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1372 -- # local bs 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1373 -- # local nb 00:19:14.366 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.625 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:19:14.625 { 00:19:14.625 "name": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:14.625 "aliases": [ 00:19:14.625 "lvs/nvme0n1p0" 00:19:14.625 ], 00:19:14.625 "product_name": "Logical Volume", 00:19:14.625 "block_size": 4096, 00:19:14.625 "num_blocks": 26476544, 00:19:14.625 "uuid": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:14.625 "assigned_rate_limits": { 00:19:14.625 "rw_ios_per_sec": 0, 00:19:14.625 "rw_mbytes_per_sec": 0, 00:19:14.625 "r_mbytes_per_sec": 0, 00:19:14.625 "w_mbytes_per_sec": 0 00:19:14.625 }, 00:19:14.625 "claimed": false, 00:19:14.625 "zoned": false, 00:19:14.625 "supported_io_types": { 00:19:14.625 "read": true, 00:19:14.625 "write": true, 00:19:14.625 "unmap": true, 00:19:14.625 "flush": false, 00:19:14.625 "reset": true, 00:19:14.625 "nvme_admin": false, 00:19:14.625 "nvme_io": false, 00:19:14.625 "nvme_io_md": false, 00:19:14.625 "write_zeroes": true, 00:19:14.625 "zcopy": false, 00:19:14.625 "get_zone_info": false, 00:19:14.625 "zone_management": false, 00:19:14.625 "zone_append": false, 00:19:14.625 "compare": false, 00:19:14.625 "compare_and_write": false, 00:19:14.625 "abort": false, 00:19:14.625 "seek_hole": true, 00:19:14.625 "seek_data": true, 00:19:14.625 "copy": false, 00:19:14.625 "nvme_iov_md": false 00:19:14.625 }, 00:19:14.625 "driver_specific": { 00:19:14.625 "lvol": { 00:19:14.625 "lvol_store_uuid": "a1b5b481-f83c-43e9-8f00-e20525eb2655", 00:19:14.625 "base_bdev": "nvme0n1", 00:19:14.625 "thin_provision": true, 00:19:14.625 "num_allocated_clusters": 0, 00:19:14.625 "snapshot": false, 00:19:14.625 "clone": false, 00:19:14.625 "esnap_clone": false 00:19:14.625 } 00:19:14.625 } 00:19:14.625 } 00:19:14.625 ]' 00:19:14.625 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:19:14.625 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # bs=4096 00:19:14.625 08:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:19:14.625 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # nb=26476544 00:19:14.625 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:19:14.625 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # echo 103424 00:19:14.625 08:32:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:14.625 08:32:02 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:14.884 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1370 -- # local 
bdev_name=deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1371 -- # local bdev_info 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1372 -- # local bs 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1373 -- # local nb 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b deeb0da8-2be7-42bc-a44f-148a214f16ca 00:19:14.884 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:19:14.884 { 00:19:14.884 "name": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:14.884 "aliases": [ 00:19:14.884 "lvs/nvme0n1p0" 00:19:14.884 ], 00:19:14.884 "product_name": "Logical Volume", 00:19:14.884 "block_size": 4096, 00:19:14.884 "num_blocks": 26476544, 00:19:14.884 "uuid": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:14.884 "assigned_rate_limits": { 00:19:14.884 "rw_ios_per_sec": 0, 00:19:14.884 "rw_mbytes_per_sec": 0, 00:19:14.884 "r_mbytes_per_sec": 0, 00:19:14.884 "w_mbytes_per_sec": 0 00:19:14.884 }, 00:19:14.884 "claimed": false, 00:19:14.884 "zoned": false, 00:19:14.884 "supported_io_types": { 00:19:14.884 "read": true, 00:19:14.885 "write": true, 00:19:14.885 "unmap": true, 00:19:14.885 "flush": false, 00:19:14.885 "reset": true, 00:19:14.885 "nvme_admin": false, 00:19:14.885 "nvme_io": false, 00:19:14.885 "nvme_io_md": false, 00:19:14.885 "write_zeroes": true, 00:19:14.885 "zcopy": false, 00:19:14.885 "get_zone_info": false, 00:19:14.885 "zone_management": false, 00:19:14.885 "zone_append": false, 00:19:14.885 "compare": false, 00:19:14.885 "compare_and_write": false, 00:19:14.885 "abort": false, 00:19:14.885 "seek_hole": true, 00:19:14.885 "seek_data": true, 00:19:14.885 "copy": false, 00:19:14.885 "nvme_iov_md": false 00:19:14.885 }, 00:19:14.885 "driver_specific": { 00:19:14.885 "lvol": { 00:19:14.885 "lvol_store_uuid": "a1b5b481-f83c-43e9-8f00-e20525eb2655", 00:19:14.885 "base_bdev": "nvme0n1", 00:19:14.885 "thin_provision": true, 00:19:14.885 "num_allocated_clusters": 0, 00:19:14.885 "snapshot": false, 00:19:14.885 "clone": false, 00:19:14.885 "esnap_clone": false 00:19:14.885 } 00:19:14.885 } 00:19:14.885 } 00:19:14.885 ]' 00:19:14.885 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # bs=4096 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # nb=26476544 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # echo 103424 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:15.144 08:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d deeb0da8-2be7-42bc-a44f-148a214f16ca -c nvc0n1p0 --l2p_dram_limit 60 00:19:15.144 [2024-11-20 08:32:02.695313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.144 [2024-11-20 08:32:02.695365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:15.144 [2024-11-20 08:32:02.695386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:15.144 
[2024-11-20 08:32:02.695398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.144 [2024-11-20 08:32:02.695505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.144 [2024-11-20 08:32:02.695525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.144 [2024-11-20 08:32:02.695540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:15.144 [2024-11-20 08:32:02.695550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.144 [2024-11-20 08:32:02.695616] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:15.144 [2024-11-20 08:32:02.696674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:15.144 [2024-11-20 08:32:02.696716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.144 [2024-11-20 08:32:02.696727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.144 [2024-11-20 08:32:02.696743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:19:15.145 [2024-11-20 08:32:02.696754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.145 [2024-11-20 08:32:02.696845] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 32cfc5f6-aca4-487d-82cd-f3f78dc801d8 00:19:15.145 [2024-11-20 08:32:02.699321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.145 [2024-11-20 08:32:02.699364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:15.145 [2024-11-20 08:32:02.699378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:15.145 [2024-11-20 08:32:02.699393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.712942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.404 [2024-11-20 08:32:02.713169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.404 [2024-11-20 08:32:02.713195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.462 ms 00:19:15.404 [2024-11-20 08:32:02.713210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.713390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.404 [2024-11-20 08:32:02.713409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.404 [2024-11-20 08:32:02.713421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:19:15.404 [2024-11-20 08:32:02.713441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.713523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.404 [2024-11-20 08:32:02.713539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:15.404 [2024-11-20 08:32:02.713551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:15.404 [2024-11-20 08:32:02.713565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.713600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:15.404 [2024-11-20 08:32:02.719069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.404 [2024-11-20 
08:32:02.719100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.404 [2024-11-20 08:32:02.719118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.483 ms 00:19:15.404 [2024-11-20 08:32:02.719132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.719179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.404 [2024-11-20 08:32:02.719191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:15.404 [2024-11-20 08:32:02.719205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:15.404 [2024-11-20 08:32:02.719215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.404 [2024-11-20 08:32:02.719270] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:15.404 [2024-11-20 08:32:02.719464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:15.404 [2024-11-20 08:32:02.719490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:15.404 [2024-11-20 08:32:02.719504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:15.404 [2024-11-20 08:32:02.719520] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:15.404 [2024-11-20 08:32:02.719532] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:15.404 [2024-11-20 08:32:02.719546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:15.404 [2024-11-20 08:32:02.719556] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:15.404 [2024-11-20 08:32:02.719568] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:15.404 [2024-11-20 08:32:02.719578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:15.405 [2024-11-20 08:32:02.719592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.405 [2024-11-20 08:32:02.719605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:15.405 [2024-11-20 08:32:02.719620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:15.405 [2024-11-20 08:32:02.719630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.405 [2024-11-20 08:32:02.719720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.405 [2024-11-20 08:32:02.719731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:15.405 [2024-11-20 08:32:02.719744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:15.405 [2024-11-20 08:32:02.719753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.405 [2024-11-20 08:32:02.719871] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:15.405 [2024-11-20 08:32:02.719883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:15.405 [2024-11-20 08:32:02.719900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.405 [2024-11-20 08:32:02.719910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.719922] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:15.405 [2024-11-20 08:32:02.719932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.719944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:15.405 [2024-11-20 08:32:02.719953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:15.405 [2024-11-20 08:32:02.719965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:15.405 [2024-11-20 08:32:02.719978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.405 [2024-11-20 08:32:02.719990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:15.405 [2024-11-20 08:32:02.720017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:15.405 [2024-11-20 08:32:02.720029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.405 [2024-11-20 08:32:02.720038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:15.405 [2024-11-20 08:32:02.720051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:15.405 [2024-11-20 08:32:02.720061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:15.405 [2024-11-20 08:32:02.720086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:15.405 [2024-11-20 08:32:02.720119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:15.405 [2024-11-20 08:32:02.720170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:15.405 [2024-11-20 08:32:02.720220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:15.405 [2024-11-20 08:32:02.720251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:15.405 [2024-11-20 08:32:02.720287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.405 [2024-11-20 08:32:02.720307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:15.405 [2024-11-20 08:32:02.720335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:15.405 [2024-11-20 08:32:02.720347] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.405 [2024-11-20 08:32:02.720356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:15.405 [2024-11-20 08:32:02.720368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:15.405 [2024-11-20 08:32:02.720376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:15.405 [2024-11-20 08:32:02.720399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:15.405 [2024-11-20 08:32:02.720413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:15.405 [2024-11-20 08:32:02.720437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:15.405 [2024-11-20 08:32:02.720447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.405 [2024-11-20 08:32:02.720488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:15.405 [2024-11-20 08:32:02.720504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:15.405 [2024-11-20 08:32:02.720513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:15.405 [2024-11-20 08:32:02.720526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:15.405 [2024-11-20 08:32:02.720535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:15.405 [2024-11-20 08:32:02.720548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:15.405 [2024-11-20 08:32:02.720567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:15.405 [2024-11-20 08:32:02.720583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:15.405 [2024-11-20 08:32:02.720608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:15.405 [2024-11-20 08:32:02.720619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:15.405 [2024-11-20 08:32:02.720632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:15.405 [2024-11-20 08:32:02.720643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:15.405 [2024-11-20 08:32:02.720657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:15.405 [2024-11-20 08:32:02.720667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:15.405 [2024-11-20 08:32:02.720681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:15.405 [2024-11-20 08:32:02.720691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:15.405 [2024-11-20 08:32:02.720708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:15.405 [2024-11-20 08:32:02.720769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:15.405 [2024-11-20 08:32:02.720783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:15.405 [2024-11-20 08:32:02.720810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:15.405 [2024-11-20 08:32:02.720823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:15.405 [2024-11-20 08:32:02.720836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:15.405 [2024-11-20 08:32:02.720847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.405 [2024-11-20 08:32:02.720861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:15.405 [2024-11-20 08:32:02.720872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:19:15.405 [2024-11-20 08:32:02.720886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.405 [2024-11-20 08:32:02.720962] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
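# NB: hedged sketch, not part of the captured trace. The startup steps logged above
# assemble an FTL instance from a base bdev plus the nvc0n1p0 write-buffer cache,
# create a superblock with a fresh UUID, then lay out, verify, and dump the metadata
# regions. Assuming SPDK's stock rpc.py interface (the bdev names below are
# hypothetical), a bring-up like this one is typically driven by:
#
#   ./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0
#
# The matching teardown appears verbatim later in this log:
#
#   ./scripts/rpc.py bdev_ftl_unload -b ftl0
#
# Because the superblock here is newly created, all 5 NV cache chunks get scrubbed
# before first use, which dominates startup time (the ~5.5 s 'Scrub NV cache' step
# traced next).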
00:19:15.405 [2024-11-20 08:32:02.720983] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:20.699 [2024-11-20 08:32:08.188715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.699 [2024-11-20 08:32:08.188791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:20.699 [2024-11-20 08:32:08.188814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5476.631 ms 00:19:20.699 [2024-11-20 08:32:08.188828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.699 [2024-11-20 08:32:08.234383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.700 [2024-11-20 08:32:08.234445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.700 [2024-11-20 08:32:08.234463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.278 ms 00:19:20.700 [2024-11-20 08:32:08.234478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.700 [2024-11-20 08:32:08.234646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.700 [2024-11-20 08:32:08.234663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.700 [2024-11-20 08:32:08.234674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:20.700 [2024-11-20 08:32:08.234691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.296259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.296314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.958 [2024-11-20 08:32:08.296337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.601 ms 00:19:20.958 [2024-11-20 08:32:08.296359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.296424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.296443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.958 [2024-11-20 08:32:08.296457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.958 [2024-11-20 08:32:08.296474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.297331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.297356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.958 [2024-11-20 08:32:08.297371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:19:20.958 [2024-11-20 08:32:08.297393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.297561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.297590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.958 [2024-11-20 08:32:08.297605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:20.958 [2024-11-20 08:32:08.297626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.323886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.323926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.958 [2024-11-20 
08:32:08.323941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:19:20.958 [2024-11-20 08:32:08.323954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.337525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.958 [2024-11-20 08:32:08.362604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.362892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.958 [2024-11-20 08:32:08.362924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.576 ms 00:19:20.958 [2024-11-20 08:32:08.362939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.479848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.479907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:20.958 [2024-11-20 08:32:08.479931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.012 ms 00:19:20.958 [2024-11-20 08:32:08.479942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.480177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.480192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.958 [2024-11-20 08:32:08.480210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:20.958 [2024-11-20 08:32:08.480220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.958 [2024-11-20 08:32:08.515107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.958 [2024-11-20 08:32:08.515148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:20.958 [2024-11-20 08:32:08.515166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.860 ms 00:19:20.958 [2024-11-20 08:32:08.515177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 08:32:08.549881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.217 [2024-11-20 08:32:08.549915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:21.217 [2024-11-20 08:32:08.549932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.700 ms 00:19:21.217 [2024-11-20 08:32:08.549942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 08:32:08.550789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.217 [2024-11-20 08:32:08.550820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:21.217 [2024-11-20 08:32:08.550836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:19:21.217 [2024-11-20 08:32:08.550847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 08:32:08.670525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.217 [2024-11-20 08:32:08.670566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:21.217 [2024-11-20 08:32:08.670588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.792 ms 00:19:21.217 [2024-11-20 08:32:08.670603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 
08:32:08.707282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.217 [2024-11-20 08:32:08.707509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:21.217 [2024-11-20 08:32:08.707538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.620 ms 00:19:21.217 [2024-11-20 08:32:08.707549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 08:32:08.741623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.217 [2024-11-20 08:32:08.741658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:21.217 [2024-11-20 08:32:08.741674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.036 ms 00:19:21.217 [2024-11-20 08:32:08.741685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.217 [2024-11-20 08:32:08.776620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.476 [2024-11-20 08:32:08.776800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:21.476 [2024-11-20 08:32:08.776827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.931 ms 00:19:21.476 [2024-11-20 08:32:08.776837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.476 [2024-11-20 08:32:08.776945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.476 [2024-11-20 08:32:08.776958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:21.476 [2024-11-20 08:32:08.776977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:21.476 [2024-11-20 08:32:08.777008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.476 [2024-11-20 08:32:08.777164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.476 [2024-11-20 08:32:08.777179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:21.476 [2024-11-20 08:32:08.777194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:21.476 [2024-11-20 08:32:08.777205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.476 [2024-11-20 08:32:08.778659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6092.670 ms, result 0 00:19:21.476 { 00:19:21.476 "name": "ftl0", 00:19:21.476 "uuid": "32cfc5f6-aca4-487d-82cd-f3f78dc801d8" 00:19:21.476 } 00:19:21.476 08:32:08 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:21.476 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # local bdev_name=ftl0 00:19:21.476 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # local bdev_timeout= 00:19:21.476 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # local i 00:19:21.477 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # [[ -z '' ]] 00:19:21.477 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # bdev_timeout=2000 00:19:21.477 08:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:21.477 08:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:21.736 [ 00:19:21.736 { 00:19:21.736 "name": "ftl0", 00:19:21.736 "aliases": [ 00:19:21.736 "32cfc5f6-aca4-487d-82cd-f3f78dc801d8" 00:19:21.736 ], 00:19:21.736 "product_name": "FTL 
disk", 00:19:21.736 "block_size": 4096, 00:19:21.736 "num_blocks": 20971520, 00:19:21.736 "uuid": "32cfc5f6-aca4-487d-82cd-f3f78dc801d8", 00:19:21.736 "assigned_rate_limits": { 00:19:21.736 "rw_ios_per_sec": 0, 00:19:21.736 "rw_mbytes_per_sec": 0, 00:19:21.736 "r_mbytes_per_sec": 0, 00:19:21.736 "w_mbytes_per_sec": 0 00:19:21.736 }, 00:19:21.736 "claimed": false, 00:19:21.736 "zoned": false, 00:19:21.736 "supported_io_types": { 00:19:21.736 "read": true, 00:19:21.736 "write": true, 00:19:21.736 "unmap": true, 00:19:21.736 "flush": true, 00:19:21.736 "reset": false, 00:19:21.736 "nvme_admin": false, 00:19:21.736 "nvme_io": false, 00:19:21.736 "nvme_io_md": false, 00:19:21.736 "write_zeroes": true, 00:19:21.736 "zcopy": false, 00:19:21.736 "get_zone_info": false, 00:19:21.736 "zone_management": false, 00:19:21.736 "zone_append": false, 00:19:21.736 "compare": false, 00:19:21.736 "compare_and_write": false, 00:19:21.736 "abort": false, 00:19:21.736 "seek_hole": false, 00:19:21.736 "seek_data": false, 00:19:21.736 "copy": false, 00:19:21.736 "nvme_iov_md": false 00:19:21.736 }, 00:19:21.736 "driver_specific": { 00:19:21.736 "ftl": { 00:19:21.736 "base_bdev": "deeb0da8-2be7-42bc-a44f-148a214f16ca", 00:19:21.736 "cache": "nvc0n1p0" 00:19:21.736 } 00:19:21.736 } 00:19:21.736 } 00:19:21.736 ] 00:19:21.736 08:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@914 -- # return 0 00:19:21.736 08:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:21.736 08:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:21.994 08:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:21.994 08:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:22.253 [2024-11-20 08:32:09.625625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.625793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.253 [2024-11-20 08:32:09.625817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.253 [2024-11-20 08:32:09.625832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.625891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.253 [2024-11-20 08:32:09.630625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.630657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.253 [2024-11-20 08:32:09.630673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:19:22.253 [2024-11-20 08:32:09.630683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.631196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.631215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.253 [2024-11-20 08:32:09.631229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:19:22.253 [2024-11-20 08:32:09.631239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.633572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.633599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.253 
[2024-11-20 08:32:09.633612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:19:22.253 [2024-11-20 08:32:09.633622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.638311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.638341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.253 [2024-11-20 08:32:09.638355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:19:22.253 [2024-11-20 08:32:09.638365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.671961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.672113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.253 [2024-11-20 08:32:09.672138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.565 ms 00:19:22.253 [2024-11-20 08:32:09.672148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.695516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.695553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.253 [2024-11-20 08:32:09.695570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:19:22.253 [2024-11-20 08:32:09.695583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.695822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.695837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.253 [2024-11-20 08:32:09.695850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:22.253 [2024-11-20 08:32:09.695860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.253 [2024-11-20 08:32:09.730193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.253 [2024-11-20 08:32:09.730359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.254 [2024-11-20 08:32:09.730386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.345 ms 00:19:22.254 [2024-11-20 08:32:09.730396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.254 [2024-11-20 08:32:09.764804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.254 [2024-11-20 08:32:09.764838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.254 [2024-11-20 08:32:09.764854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.357 ms 00:19:22.254 [2024-11-20 08:32:09.764864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.254 [2024-11-20 08:32:09.799201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.254 [2024-11-20 08:32:09.799234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.254 [2024-11-20 08:32:09.799249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.332 ms 00:19:22.254 [2024-11-20 08:32:09.799259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.514 [2024-11-20 08:32:09.833296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.514 [2024-11-20 08:32:09.833328] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.514 [2024-11-20 08:32:09.833344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.932 ms 00:19:22.514 [2024-11-20 08:32:09.833353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.514 [2024-11-20 08:32:09.833416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.514 [2024-11-20 08:32:09.833432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 [2024-11-20 08:32:09.833694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.514 
[2024-11-20 08:32:09.833704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.833982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:22.515 [2024-11-20 08:32:09.834038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.515 [2024-11-20 08:32:09.834739] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.515 [2024-11-20 08:32:09.834752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32cfc5f6-aca4-487d-82cd-f3f78dc801d8 00:19:22.515 [2024-11-20 08:32:09.834764] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.515 [2024-11-20 08:32:09.834779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.515 [2024-11-20 08:32:09.834790] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.515 [2024-11-20 08:32:09.834806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.515 [2024-11-20 08:32:09.834816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.516 [2024-11-20 08:32:09.834829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.516 [2024-11-20 08:32:09.834840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.516 [2024-11-20 08:32:09.834852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.516 [2024-11-20 08:32:09.834861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.516 [2024-11-20 08:32:09.834874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.516 [2024-11-20 08:32:09.834885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.516 [2024-11-20 08:32:09.834898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:19:22.516 [2024-11-20 08:32:09.834908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:09.854362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.516 [2024-11-20 08:32:09.854397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.516 [2024-11-20 08:32:09.854412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.408 ms 00:19:22.516 [2024-11-20 08:32:09.854422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:09.855024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.516 [2024-11-20 08:32:09.855043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.516 [2024-11-20 08:32:09.855057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:19:22.516 [2024-11-20 08:32:09.855068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:09.923911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.516 [2024-11-20 08:32:09.923948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.516 [2024-11-20 08:32:09.923964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.516 [2024-11-20 08:32:09.923975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:22.516 [2024-11-20 08:32:09.924063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.516 [2024-11-20 08:32:09.924075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.516 [2024-11-20 08:32:09.924088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.516 [2024-11-20 08:32:09.924098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:09.924239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.516 [2024-11-20 08:32:09.924253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.516 [2024-11-20 08:32:09.924271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.516 [2024-11-20 08:32:09.924282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:09.924322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.516 [2024-11-20 08:32:09.924343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.516 [2024-11-20 08:32:09.924356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.516 [2024-11-20 08:32:09.924366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.516 [2024-11-20 08:32:10.060666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.516 [2024-11-20 08:32:10.060932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.516 [2024-11-20 08:32:10.060964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.516 [2024-11-20 08:32:10.060976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.776 [2024-11-20 08:32:10.161203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.161214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.776 [2024-11-20 08:32:10.161404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.161418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.776 [2024-11-20 08:32:10.161548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.161558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.776 [2024-11-20 08:32:10.161731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 
08:32:10.161741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.776 [2024-11-20 08:32:10.161841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.161852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.161916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.161927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.776 [2024-11-20 08:32:10.161940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.161950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.162055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.776 [2024-11-20 08:32:10.162069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.776 [2024-11-20 08:32:10.162082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.776 [2024-11-20 08:32:10.162092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 08:32:10.162329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.519 ms, result 0 00:19:22.776 true 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 73835 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' -z 73835 ']' 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@961 -- # kill -0 73835 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # uname 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73835 00:19:22.776 killing process with pid 73835 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73835' 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # kill 73835 00:19:22.776 08:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@981 -- # wait 73835 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # local sanitizers 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # shift 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local asan_lib= 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # grep libasan 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # break 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:28.052 08:32:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:28.052 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:28.052 fio-3.35 00:19:28.052 Starting 1 thread 00:19:34.625 00:19:34.625 test: (groupid=0, jobs=1): err= 0: pid=74075: Wed Nov 20 08:32:21 2024 00:19:34.625 read: IOPS=823, BW=54.7MiB/s (57.3MB/s)(255MiB/4656msec) 00:19:34.625 slat (usec): min=6, max=154, avg=10.20, stdev= 3.81 00:19:34.625 clat (usec): min=363, max=1602, avg=542.00, stdev=53.30 00:19:34.625 lat (usec): min=372, max=1612, avg=552.20, stdev=53.76 00:19:34.625 clat percentiles (usec): 00:19:34.625 | 1.00th=[ 429], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 498], 00:19:34.625 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:19:34.625 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 619], 00:19:34.625 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 799], 99.95th=[ 1156], 00:19:34.625 | 99.99th=[ 1598] 00:19:34.625 write: IOPS=829, BW=55.1MiB/s (57.7MB/s)(256MiB/4651msec); 0 zone resets 00:19:34.625 slat (nsec): min=16617, max=70521, avg=25831.45, stdev=5496.37 00:19:34.625 clat (usec): min=387, max=1904, avg=620.99, stdev=64.62 00:19:34.625 lat (usec): min=414, max=1950, avg=646.83, stdev=64.99 00:19:34.625 clat percentiles (usec): 00:19:34.625 | 1.00th=[ 490], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 570], 00:19:34.625 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 644], 00:19:34.625 | 70.00th=[ 652], 80.00th=[ 660], 90.00th=[ 676], 95.00th=[ 685], 00:19:34.625 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 1020], 99.95th=[ 1045], 00:19:34.625 | 99.99th=[ 1909] 00:19:34.625 bw ( KiB/s): min=55080, max=59024, per=100.00%, avg=56409.78, stdev=1192.96, samples=9 00:19:34.625 iops : min= 810, max= 868, avg=829.56, stdev=17.54, samples=9 00:19:34.625 lat (usec) : 500=10.86%, 750=88.11%, 1000=0.91% 00:19:34.625 lat 
(msec) : 2=0.12% 00:19:34.625 cpu : usr=99.21%, sys=0.06%, ctx=7, majf=0, minf=1169 00:19:34.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.625 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:34.625 00:19:34.625 Run status group 0 (all jobs): 00:19:34.625 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=255MiB (267MB), run=4656-4656msec 00:19:34.625 WRITE: bw=55.1MiB/s (57.7MB/s), 55.1MiB/s-55.1MiB/s (57.7MB/s-57.7MB/s), io=256MiB (269MB), run=4651-4651msec 00:19:35.563 ----------------------------------------------------- 00:19:35.563 Suppressions used: 00:19:35.563 count bytes template 00:19:35.563 1 5 /usr/src/fio/parse.c 00:19:35.563 1 8 libtcmalloc_minimal.so 00:19:35.563 1 904 libcrypto.so 00:19:35.563 ----------------------------------------------------- 00:19:35.563 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@735 -- # xtrace_disable 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # local sanitizers 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # shift 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local asan_lib= 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # grep libasan 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # break 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:35.821 08:32:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:36.079 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:36.079 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:36.079 fio-3.35 00:19:36.079 Starting 2 threads 00:20:02.629 00:20:02.629 first_half: (groupid=0, jobs=1): err= 0: pid=74189: Wed Nov 20 08:32:49 2024 00:20:02.629 read: IOPS=2612, BW=10.2MiB/s (10.7MB/s)(256MiB/25059msec) 00:20:02.629 slat (usec): min=3, max=424, avg= 9.72, stdev= 4.52 00:20:02.629 clat (usec): min=635, max=278386, avg=41108.24, stdev=25153.21 00:20:02.629 lat (usec): min=639, max=278398, avg=41117.95, stdev=25154.02 00:20:02.629 clat percentiles (msec): 00:20:02.629 | 1.00th=[ 8], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:20:02.629 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:20:02.629 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 43], 95.00th=[ 82], 00:20:02.629 | 99.00th=[ 169], 99.50th=[ 186], 99.90th=[ 218], 99.95th=[ 243], 00:20:02.629 | 99.99th=[ 271] 00:20:02.629 write: IOPS=2618, BW=10.2MiB/s (10.7MB/s)(256MiB/25031msec); 0 zone resets 00:20:02.629 slat (usec): min=4, max=1495, avg= 9.62, stdev=11.17 00:20:02.629 clat (usec): min=360, max=45036, avg=7837.42, stdev=7115.33 00:20:02.629 lat (usec): min=366, max=45050, avg=7847.03, stdev=7115.61 00:20:02.629 clat percentiles (usec): 00:20:02.629 | 1.00th=[ 1172], 5.00th=[ 1582], 10.00th=[ 1909], 20.00th=[ 3163], 00:20:02.629 | 30.00th=[ 4621], 40.00th=[ 5735], 50.00th=[ 6521], 60.00th=[ 7242], 00:20:02.629 | 70.00th=[ 8094], 80.00th=[ 9634], 90.00th=[13566], 95.00th=[20579], 00:20:02.629 | 99.00th=[40109], 99.50th=[41157], 99.90th=[42730], 99.95th=[43254], 00:20:02.629 | 99.99th=[44303] 00:20:02.629 bw ( KiB/s): min= 424, max=41072, per=100.00%, avg=21698.42, stdev=14832.09, samples=24 00:20:02.629 iops : min= 106, max=10268, avg=5424.58, stdev=3708.00, samples=24 00:20:02.629 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.12% 00:20:02.629 lat (msec) : 2=5.53%, 4=7.10%, 10=29.25%, 20=6.92%, 50=47.37% 00:20:02.629 lat (msec) : 100=1.65%, 250=2.01%, 500=0.02% 00:20:02.629 cpu : usr=98.66%, sys=0.38%, ctx=51, majf=0, minf=5524 00:20:02.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:02.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.629 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.629 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.629 second_half: (groupid=0, jobs=1): err= 0: pid=74190: Wed Nov 20 08:32:49 2024 00:20:02.629 read: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(256MiB/24884msec) 00:20:02.629 slat (nsec): min=3451, max=82910, avg=8928.76, stdev=3935.44 00:20:02.629 clat (msec): min=11, max=216, avg=41.82, stdev=23.60 00:20:02.629 lat (msec): min=11, max=216, avg=41.83, stdev=23.60 00:20:02.629 clat percentiles (msec): 00:20:02.629 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:20:02.629 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:20:02.629 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 75], 00:20:02.629 | 99.00th=[ 171], 99.50th=[ 184], 
99.90th=[ 203], 99.95th=[ 205], 00:20:02.629 | 99.99th=[ 215] 00:20:02.629 write: IOPS=2648, BW=10.3MiB/s (10.8MB/s)(256MiB/24748msec); 0 zone resets 00:20:02.629 slat (usec): min=4, max=777, avg= 9.09, stdev= 9.09 00:20:02.629 clat (usec): min=522, max=40204, avg=6776.68, stdev=3949.83 00:20:02.629 lat (usec): min=530, max=40209, avg=6785.77, stdev=3950.63 00:20:02.629 clat percentiles (usec): 00:20:02.629 | 1.00th=[ 1319], 5.00th=[ 2040], 10.00th=[ 2638], 20.00th=[ 3720], 00:20:02.629 | 30.00th=[ 4817], 40.00th=[ 5473], 50.00th=[ 6194], 60.00th=[ 6718], 00:20:02.629 | 70.00th=[ 7570], 80.00th=[ 8848], 90.00th=[12256], 95.00th=[13698], 00:20:02.629 | 99.00th=[19792], 99.50th=[27132], 99.90th=[35390], 99.95th=[38011], 00:20:02.629 | 99.99th=[39584] 00:20:02.629 bw ( KiB/s): min= 360, max=46096, per=100.00%, avg=21757.33, stdev=13440.09, samples=24 00:20:02.629 iops : min= 90, max=11524, avg=5439.33, stdev=3360.02, samples=24 00:20:02.629 lat (usec) : 750=0.05%, 1000=0.11% 00:20:02.629 lat (msec) : 2=2.23%, 4=9.06%, 10=30.26%, 20=7.86%, 50=46.65% 00:20:02.629 lat (msec) : 100=1.84%, 250=1.95% 00:20:02.629 cpu : usr=99.16%, sys=0.24%, ctx=35, majf=0, minf=5579 00:20:02.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:02.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.629 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.629 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.629 00:20:02.629 Run status group 0 (all jobs): 00:20:02.629 READ: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=512MiB (536MB), run=24884-25059msec 00:20:02.629 WRITE: bw=20.5MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=512MiB (537MB), run=24748-25031msec 00:20:05.165 ----------------------------------------------------- 00:20:05.165 Suppressions used: 00:20:05.165 count bytes template 00:20:05.165 2 10 /usr/src/fio/parse.c 00:20:05.165 3 288 /usr/src/fio/iolog.c 00:20:05.165 1 8 libtcmalloc_minimal.so 00:20:05.165 1 904 libcrypto.so 00:20:05.165 ----------------------------------------------------- 00:20:05.165 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@735 -- # xtrace_disable 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1331 -- # local 
sanitizers 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # shift 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # grep libasan 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # break 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.165 08:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:05.165 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:05.165 fio-3.35 00:20:05.165 Starting 1 thread 00:20:23.267 00:20:23.267 test: (groupid=0, jobs=1): err= 0: pid=74514: Wed Nov 20 08:33:08 2024 00:20:23.267 read: IOPS=7294, BW=28.5MiB/s (29.9MB/s)(255MiB/8938msec) 00:20:23.267 slat (nsec): min=3251, max=74251, avg=7242.35, stdev=3866.70 00:20:23.267 clat (usec): min=678, max=34849, avg=17535.33, stdev=1586.29 00:20:23.267 lat (usec): min=683, max=34856, avg=17542.57, stdev=1586.97 00:20:23.267 clat percentiles (usec): 00:20:23.267 | 1.00th=[15664], 5.00th=[16057], 10.00th=[16450], 20.00th=[16909], 00:20:23.267 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:20:23.267 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 95.00th=[20317], 00:20:23.267 | 99.00th=[23987], 99.50th=[25560], 99.90th=[31589], 99.95th=[32375], 00:20:23.267 | 99.99th=[34341] 00:20:23.267 write: IOPS=11.2k, BW=43.6MiB/s (45.8MB/s)(256MiB/5865msec); 0 zone resets 00:20:23.267 slat (usec): min=4, max=1603, avg= 8.99, stdev=11.77 00:20:23.267 clat (usec): min=654, max=65494, avg=11400.16, stdev=14310.74 00:20:23.267 lat (usec): min=663, max=65501, avg=11409.15, stdev=14310.90 00:20:23.267 clat percentiles (usec): 00:20:23.267 | 1.00th=[ 1020], 5.00th=[ 1254], 10.00th=[ 1434], 20.00th=[ 1680], 00:20:23.267 | 30.00th=[ 1942], 40.00th=[ 2835], 50.00th=[ 6980], 60.00th=[ 8160], 00:20:23.267 | 70.00th=[10028], 80.00th=[13173], 90.00th=[39060], 95.00th=[43779], 00:20:23.268 | 99.00th=[54789], 99.50th=[57410], 99.90th=[62129], 99.95th=[63177], 00:20:23.268 | 99.99th=[64750] 00:20:23.268 bw ( KiB/s): min=29168, max=61048, per=97.74%, avg=43684.92, stdev=9939.80, samples=12 00:20:23.268 iops : min= 7292, max=15262, avg=10921.17, stdev=2485.02, samples=12 00:20:23.268 lat (usec) : 750=0.01%, 1000=0.39% 00:20:23.268 lat (msec) : 2=15.29%, 4=5.27%, 10=14.27%, 20=53.83%, 50=9.70% 00:20:23.268 lat (msec) : 100=1.25% 00:20:23.268 cpu : usr=98.87%, sys=0.32%, ctx=21, majf=0, minf=5565 00:20:23.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:20:23.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.268 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.268 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.268 00:20:23.268 Run status group 0 (all jobs): 00:20:23.268 READ: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=255MiB (267MB), run=8938-8938msec 00:20:23.268 WRITE: bw=43.6MiB/s (45.8MB/s), 43.6MiB/s-43.6MiB/s (45.8MB/s-45.8MB/s), io=256MiB (268MB), run=5865-5865msec 00:20:23.527 ----------------------------------------------------- 00:20:23.527 Suppressions used: 00:20:23.527 count bytes template 00:20:23.527 1 5 /usr/src/fio/parse.c 00:20:23.527 2 192 /usr/src/fio/iolog.c 00:20:23.527 1 8 libtcmalloc_minimal.so 00:20:23.527 1 904 libcrypto.so 00:20:23.527 ----------------------------------------------------- 00:20:23.527 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- common/autotest_common.sh@735 -- # xtrace_disable 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:23.527 Remove shared memory files 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:23.527 08:33:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57243 /dev/shm/spdk_tgt_trace.pid72708 00:20:23.527 08:33:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:23.527 08:33:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:23.527 ************************************ 00:20:23.527 END TEST ftl_fio_basic 00:20:23.527 ************************************ 00:20:23.527 00:20:23.527 real 1m12.695s 00:20:23.527 user 2m36.892s 00:20:23.527 sys 0m4.166s 00:20:23.527 08:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:23.527 08:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 08:33:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:23.527 08:33:11 ftl -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:20:23.527 08:33:11 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:23.527 08:33:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:23.786 ************************************ 00:20:23.787 START TEST ftl_bdevperf 00:20:23.787 ************************************ 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:23.787 * Looking for test storage... 
00:20:23.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1638 -- # lcov --version 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.787 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:20:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.046 --rc genhtml_branch_coverage=1 00:20:24.046 --rc genhtml_function_coverage=1 00:20:24.046 --rc genhtml_legend=1 00:20:24.046 --rc geninfo_all_blocks=1 00:20:24.046 --rc geninfo_unexecuted_blocks=1 00:20:24.046 00:20:24.046 ' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:20:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.046 --rc genhtml_branch_coverage=1 00:20:24.046 
--rc genhtml_function_coverage=1 00:20:24.046 --rc genhtml_legend=1 00:20:24.046 --rc geninfo_all_blocks=1 00:20:24.046 --rc geninfo_unexecuted_blocks=1 00:20:24.046 00:20:24.046 ' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:20:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.046 --rc genhtml_branch_coverage=1 00:20:24.046 --rc genhtml_function_coverage=1 00:20:24.046 --rc genhtml_legend=1 00:20:24.046 --rc geninfo_all_blocks=1 00:20:24.046 --rc geninfo_unexecuted_blocks=1 00:20:24.046 00:20:24.046 ' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:20:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.046 --rc genhtml_branch_coverage=1 00:20:24.046 --rc genhtml_function_coverage=1 00:20:24.046 --rc genhtml_legend=1 00:20:24.046 --rc geninfo_all_blocks=1 00:20:24.046 --rc geninfo_unexecuted_blocks=1 00:20:24.046 00:20:24.046 ' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:24.046 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=74780 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 74780 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # '[' -z 74780 ']' 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@843 -- # local max_retries=100 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@847 -- # xtrace_disable 00:20:24.047 08:33:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:24.047 [2024-11-20 08:33:11.496925] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:20:24.047 [2024-11-20 08:33:11.497387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74780 ] 00:20:24.306 [2024-11-20 08:33:11.674074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.306 [2024-11-20 08:33:11.785239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@871 -- # return 0 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:24.875 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1370 -- # local bdev_name=nvme0n1 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1372 -- # local bs 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1373 -- # local nb 00:20:25.135 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:25.396 { 00:20:25.396 "name": "nvme0n1", 00:20:25.396 "aliases": [ 00:20:25.396 "39b846e2-0edc-4ae1-9e98-3b5723ee6f01" 00:20:25.396 ], 00:20:25.396 "product_name": "NVMe disk", 00:20:25.396 "block_size": 4096, 00:20:25.396 "num_blocks": 1310720, 00:20:25.396 "uuid": "39b846e2-0edc-4ae1-9e98-3b5723ee6f01", 00:20:25.396 "numa_id": -1, 00:20:25.396 "assigned_rate_limits": { 00:20:25.396 "rw_ios_per_sec": 0, 00:20:25.396 "rw_mbytes_per_sec": 0, 00:20:25.396 "r_mbytes_per_sec": 0, 00:20:25.396 "w_mbytes_per_sec": 0 00:20:25.396 }, 00:20:25.396 "claimed": true, 00:20:25.396 "claim_type": "read_many_write_one", 00:20:25.396 "zoned": false, 00:20:25.396 "supported_io_types": { 00:20:25.396 "read": true, 00:20:25.396 "write": true, 00:20:25.396 "unmap": true, 00:20:25.396 "flush": true, 00:20:25.396 "reset": true, 00:20:25.396 "nvme_admin": true, 00:20:25.396 "nvme_io": true, 00:20:25.396 "nvme_io_md": false, 00:20:25.396 "write_zeroes": true, 00:20:25.396 "zcopy": false, 00:20:25.396 "get_zone_info": false, 00:20:25.396 "zone_management": false, 00:20:25.396 "zone_append": false, 00:20:25.396 "compare": true, 00:20:25.396 "compare_and_write": false, 00:20:25.396 "abort": true, 00:20:25.396 "seek_hole": false, 00:20:25.396 "seek_data": false, 00:20:25.396 "copy": true, 00:20:25.396 "nvme_iov_md": false 00:20:25.396 }, 00:20:25.396 "driver_specific": { 00:20:25.396 
"nvme": [ 00:20:25.396 { 00:20:25.396 "pci_address": "0000:00:11.0", 00:20:25.396 "trid": { 00:20:25.396 "trtype": "PCIe", 00:20:25.396 "traddr": "0000:00:11.0" 00:20:25.396 }, 00:20:25.396 "ctrlr_data": { 00:20:25.396 "cntlid": 0, 00:20:25.396 "vendor_id": "0x1b36", 00:20:25.396 "model_number": "QEMU NVMe Ctrl", 00:20:25.396 "serial_number": "12341", 00:20:25.396 "firmware_revision": "8.0.0", 00:20:25.396 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:25.396 "oacs": { 00:20:25.396 "security": 0, 00:20:25.396 "format": 1, 00:20:25.396 "firmware": 0, 00:20:25.396 "ns_manage": 1 00:20:25.396 }, 00:20:25.396 "multi_ctrlr": false, 00:20:25.396 "ana_reporting": false 00:20:25.396 }, 00:20:25.396 "vs": { 00:20:25.396 "nvme_version": "1.4" 00:20:25.396 }, 00:20:25.396 "ns_data": { 00:20:25.396 "id": 1, 00:20:25.396 "can_share": false 00:20:25.396 } 00:20:25.396 } 00:20:25.396 ], 00:20:25.396 "mp_policy": "active_passive" 00:20:25.396 } 00:20:25.396 } 00:20:25.396 ]' 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # bs=4096 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # nb=1310720 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # echo 5120 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:25.396 08:33:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:25.655 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a1b5b481-f83c-43e9-8f00-e20525eb2655 00:20:25.655 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:25.655 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1b5b481-f83c-43e9-8f00-e20525eb2655 00:20:26.224 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:26.224 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6463ebc0-405c-41f3-9a0f-97eee14660d5 00:20:26.224 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6463ebc0-405c-41f3-9a0f-97eee14660d5 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.483 08:33:13 
ftl.ftl_bdevperf -- common/autotest_common.sh@1370 -- # local bdev_name=5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1372 -- # local bs 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1373 -- # local nb 00:20:26.483 08:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:26.744 { 00:20:26.744 "name": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:26.744 "aliases": [ 00:20:26.744 "lvs/nvme0n1p0" 00:20:26.744 ], 00:20:26.744 "product_name": "Logical Volume", 00:20:26.744 "block_size": 4096, 00:20:26.744 "num_blocks": 26476544, 00:20:26.744 "uuid": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:26.744 "assigned_rate_limits": { 00:20:26.744 "rw_ios_per_sec": 0, 00:20:26.744 "rw_mbytes_per_sec": 0, 00:20:26.744 "r_mbytes_per_sec": 0, 00:20:26.744 "w_mbytes_per_sec": 0 00:20:26.744 }, 00:20:26.744 "claimed": false, 00:20:26.744 "zoned": false, 00:20:26.744 "supported_io_types": { 00:20:26.744 "read": true, 00:20:26.744 "write": true, 00:20:26.744 "unmap": true, 00:20:26.744 "flush": false, 00:20:26.744 "reset": true, 00:20:26.744 "nvme_admin": false, 00:20:26.744 "nvme_io": false, 00:20:26.744 "nvme_io_md": false, 00:20:26.744 "write_zeroes": true, 00:20:26.744 "zcopy": false, 00:20:26.744 "get_zone_info": false, 00:20:26.744 "zone_management": false, 00:20:26.744 "zone_append": false, 00:20:26.744 "compare": false, 00:20:26.744 "compare_and_write": false, 00:20:26.744 "abort": false, 00:20:26.744 "seek_hole": true, 00:20:26.744 "seek_data": true, 00:20:26.744 "copy": false, 00:20:26.744 "nvme_iov_md": false 00:20:26.744 }, 00:20:26.744 "driver_specific": { 00:20:26.744 "lvol": { 00:20:26.744 "lvol_store_uuid": "6463ebc0-405c-41f3-9a0f-97eee14660d5", 00:20:26.744 "base_bdev": "nvme0n1", 00:20:26.744 "thin_provision": true, 00:20:26.744 "num_allocated_clusters": 0, 00:20:26.744 "snapshot": false, 00:20:26.744 "clone": false, 00:20:26.744 "esnap_clone": false 00:20:26.744 } 00:20:26.744 } 00:20:26.744 } 00:20:26.744 ]' 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # bs=4096 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # nb=26476544 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # echo 103424 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:26.744 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1370 -- # local bdev_name=5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1372 -- # local bs 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1373 -- # local nb 00:20:27.004 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:27.264 { 00:20:27.264 "name": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:27.264 "aliases": [ 00:20:27.264 "lvs/nvme0n1p0" 00:20:27.264 ], 00:20:27.264 "product_name": "Logical Volume", 00:20:27.264 "block_size": 4096, 00:20:27.264 "num_blocks": 26476544, 00:20:27.264 "uuid": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:27.264 "assigned_rate_limits": { 00:20:27.264 "rw_ios_per_sec": 0, 00:20:27.264 "rw_mbytes_per_sec": 0, 00:20:27.264 "r_mbytes_per_sec": 0, 00:20:27.264 "w_mbytes_per_sec": 0 00:20:27.264 }, 00:20:27.264 "claimed": false, 00:20:27.264 "zoned": false, 00:20:27.264 "supported_io_types": { 00:20:27.264 "read": true, 00:20:27.264 "write": true, 00:20:27.264 "unmap": true, 00:20:27.264 "flush": false, 00:20:27.264 "reset": true, 00:20:27.264 "nvme_admin": false, 00:20:27.264 "nvme_io": false, 00:20:27.264 "nvme_io_md": false, 00:20:27.264 "write_zeroes": true, 00:20:27.264 "zcopy": false, 00:20:27.264 "get_zone_info": false, 00:20:27.264 "zone_management": false, 00:20:27.264 "zone_append": false, 00:20:27.264 "compare": false, 00:20:27.264 "compare_and_write": false, 00:20:27.264 "abort": false, 00:20:27.264 "seek_hole": true, 00:20:27.264 "seek_data": true, 00:20:27.264 "copy": false, 00:20:27.264 "nvme_iov_md": false 00:20:27.264 }, 00:20:27.264 "driver_specific": { 00:20:27.264 "lvol": { 00:20:27.264 "lvol_store_uuid": "6463ebc0-405c-41f3-9a0f-97eee14660d5", 00:20:27.264 "base_bdev": "nvme0n1", 00:20:27.264 "thin_provision": true, 00:20:27.264 "num_allocated_clusters": 0, 00:20:27.264 "snapshot": false, 00:20:27.264 "clone": false, 00:20:27.264 "esnap_clone": false 00:20:27.264 } 00:20:27.264 } 00:20:27.264 } 00:20:27.264 ]' 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # bs=4096 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # nb=26476544 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # echo 103424 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:27.264 08:33:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1370 -- # local bdev_name=5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1372 -- # local bs 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1373 -- # local nb 00:20:27.530 08:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e284eaa-a0f2-489b-9b45-bc5aecaac213 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:27.806 { 00:20:27.806 "name": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:27.806 "aliases": [ 00:20:27.806 "lvs/nvme0n1p0" 00:20:27.806 ], 00:20:27.806 "product_name": "Logical Volume", 00:20:27.806 "block_size": 4096, 00:20:27.806 "num_blocks": 26476544, 00:20:27.806 "uuid": "5e284eaa-a0f2-489b-9b45-bc5aecaac213", 00:20:27.806 "assigned_rate_limits": { 00:20:27.806 "rw_ios_per_sec": 0, 00:20:27.806 "rw_mbytes_per_sec": 0, 00:20:27.806 "r_mbytes_per_sec": 0, 00:20:27.806 "w_mbytes_per_sec": 0 00:20:27.806 }, 00:20:27.806 "claimed": false, 00:20:27.806 "zoned": false, 00:20:27.806 "supported_io_types": { 00:20:27.806 "read": true, 00:20:27.806 "write": true, 00:20:27.806 "unmap": true, 00:20:27.806 "flush": false, 00:20:27.806 "reset": true, 00:20:27.806 "nvme_admin": false, 00:20:27.806 "nvme_io": false, 00:20:27.806 "nvme_io_md": false, 00:20:27.806 "write_zeroes": true, 00:20:27.806 "zcopy": false, 00:20:27.806 "get_zone_info": false, 00:20:27.806 "zone_management": false, 00:20:27.806 "zone_append": false, 00:20:27.806 "compare": false, 00:20:27.806 "compare_and_write": false, 00:20:27.806 "abort": false, 00:20:27.806 "seek_hole": true, 00:20:27.806 "seek_data": true, 00:20:27.806 "copy": false, 00:20:27.806 "nvme_iov_md": false 00:20:27.806 }, 00:20:27.806 "driver_specific": { 00:20:27.806 "lvol": { 00:20:27.806 "lvol_store_uuid": "6463ebc0-405c-41f3-9a0f-97eee14660d5", 00:20:27.806 "base_bdev": "nvme0n1", 00:20:27.806 "thin_provision": true, 00:20:27.806 "num_allocated_clusters": 0, 00:20:27.806 "snapshot": false, 00:20:27.806 "clone": false, 00:20:27.806 "esnap_clone": false 00:20:27.806 } 00:20:27.806 } 00:20:27.806 } 00:20:27.806 ]' 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # bs=4096 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # nb=26476544 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # echo 103424 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:27.806 08:33:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e284eaa-a0f2-489b-9b45-bc5aecaac213 -c nvc0n1p0 --l2p_dram_limit 20 00:20:28.082 [2024-11-20 08:33:15.464313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-11-20 08:33:15.464386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.082 [2024-11-20 08:33:15.464406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.082 [2024-11-20 08:33:15.464422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-11-20 08:33:15.464490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-11-20 08:33:15.464511] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.082 [2024-11-20 08:33:15.464524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:28.082 [2024-11-20 08:33:15.464540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-11-20 08:33:15.464563] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.082 [2024-11-20 08:33:15.465643] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.082 [2024-11-20 08:33:15.465677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-11-20 08:33:15.465694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.082 [2024-11-20 08:33:15.465707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:20:28.082 [2024-11-20 08:33:15.465723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.465871] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b3cb80c3-b17e-4cf9-ac5e-957ca14b3571 00:20:28.083 [2024-11-20 08:33:15.467362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.467534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:28.083 [2024-11-20 08:33:15.467562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:28.083 [2024-11-20 08:33:15.467579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.475374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.475406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.083 [2024-11-20 08:33:15.475423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.754 ms 00:20:28.083 [2024-11-20 08:33:15.475452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.475564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.475580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.083 [2024-11-20 08:33:15.475601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:28.083 [2024-11-20 08:33:15.475613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.475688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.475702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.083 [2024-11-20 08:33:15.475718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:28.083 [2024-11-20 08:33:15.475731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.475762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.083 [2024-11-20 08:33:15.481118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.481155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.083 [2024-11-20 08:33:15.481170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.377 ms 00:20:28.083 [2024-11-20 08:33:15.481187] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.481225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.481241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.083 [2024-11-20 08:33:15.481254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:28.083 [2024-11-20 08:33:15.481269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.481306] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:28.083 [2024-11-20 08:33:15.481438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.083 [2024-11-20 08:33:15.481453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.083 [2024-11-20 08:33:15.481472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:28.083 [2024-11-20 08:33:15.481488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.083 [2024-11-20 08:33:15.481505] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.083 [2024-11-20 08:33:15.481518] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:28.083 [2024-11-20 08:33:15.481532] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.083 [2024-11-20 08:33:15.481544] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.083 [2024-11-20 08:33:15.481558] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.083 [2024-11-20 08:33:15.481570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.481590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.083 [2024-11-20 08:33:15.481602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:28.083 [2024-11-20 08:33:15.481617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.481687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-11-20 08:33:15.481705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.083 [2024-11-20 08:33:15.481717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:28.083 [2024-11-20 08:33:15.481734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-11-20 08:33:15.481814] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.083 [2024-11-20 08:33:15.481831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.083 [2024-11-20 08:33:15.481846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.083 [2024-11-20 08:33:15.481861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.481873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.083 [2024-11-20 08:33:15.481887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.481899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:28.083 
[2024-11-20 08:33:15.481913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.083 [2024-11-20 08:33:15.481924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:28.083 [2024-11-20 08:33:15.481938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.083 [2024-11-20 08:33:15.481966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.083 [2024-11-20 08:33:15.481982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:28.083 [2024-11-20 08:33:15.481994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.083 [2024-11-20 08:33:15.482041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.083 [2024-11-20 08:33:15.482054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:28.083 [2024-11-20 08:33:15.482072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.083 [2024-11-20 08:33:15.482098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.083 [2024-11-20 08:33:15.482138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.083 [2024-11-20 08:33:15.482199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.083 [2024-11-20 08:33:15.482241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.083 [2024-11-20 08:33:15.482288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.083 [2024-11-20 08:33:15.482329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.083 [2024-11-20 08:33:15.482355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.083 [2024-11-20 08:33:15.482370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:28.083 [2024-11-20 08:33:15.482381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.083 [2024-11-20 08:33:15.482396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.083 [2024-11-20 08:33:15.482408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:28.083 [2024-11-20 08:33:15.482422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.083 [2024-11-20 08:33:15.482447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:28.083 [2024-11-20 08:33:15.482458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.083 [2024-11-20 08:33:15.482485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.083 [2024-11-20 08:33:15.482500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.083 [2024-11-20 08:33:15.482513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.083 [2024-11-20 08:33:15.482533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.083 [2024-11-20 08:33:15.482545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.084 [2024-11-20 08:33:15.482560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.084 [2024-11-20 08:33:15.482572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.084 [2024-11-20 08:33:15.482591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.084 [2024-11-20 08:33:15.482603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.084 [2024-11-20 08:33:15.482628] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.084 [2024-11-20 08:33:15.482643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:28.084 [2024-11-20 08:33:15.482676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:28.084 [2024-11-20 08:33:15.482691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:28.084 [2024-11-20 08:33:15.482704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:28.084 [2024-11-20 08:33:15.482719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:28.084 [2024-11-20 08:33:15.482732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:28.084 [2024-11-20 08:33:15.482747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:28.084 [2024-11-20 08:33:15.482759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:28.084 [2024-11-20 08:33:15.482777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:28.084 [2024-11-20 08:33:15.482790] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:28.084 [2024-11-20 08:33:15.482861] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.084 [2024-11-20 08:33:15.482874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.084 [2024-11-20 08:33:15.482905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.084 [2024-11-20 08:33:15.482921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.084 [2024-11-20 08:33:15.482935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.084 [2024-11-20 08:33:15.482952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.084 [2024-11-20 08:33:15.482967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.084 [2024-11-20 08:33:15.482982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:20:28.084 [2024-11-20 08:33:15.483284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.084 [2024-11-20 08:33:15.483394] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
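Note: the startup sequence above is the first-time initialization triggered by the `bdev_ftl_create` RPC traced earlier in this test. A condensed sketch of the bdev stack from this run follows; the commands mirror the traced rpc.py calls, except that the base lvol is addressed by its `lvs/nvme0n1p0` alias rather than the generated UUID (an illustrative substitution, not what this run passed).

```bash
# Sketch of the bdev stack assembled by bdevperf.sh in this run, using the
# device addresses and sizes from the trace above. The lvs/nvme0n1p0 alias
# stands in for the UUID the test actually passed to bdev_ftl_create.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore on it
$rpc bdev_lvol_create -t -l lvs nvme0n1p0 103424                   # thin 103424 MiB lvol

$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache NVMe
$rpc bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB NV cache split

# FTL bdev with the L2P capped at 20 MiB of DRAM; the NV cache scrub logged
# here is part of this call's first-time startup, hence the 240 s timeout.
$rpc -t 240 bdev_ftl_create -b ftl0 -d lvs/nvme0n1p0 -c nvc0n1p0 --l2p_dram_limit 20
```

With `--l2p_dram_limit 20` against the 80.00 MiB l2p region in the layout dump above, only part of the L2P can stay resident, which is why ftl_l2p_cache later reports a maximum resident size of 19 (of 20) MiB.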
00:20:28.084 [2024-11-20 08:33:15.483457] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:32.283 [2024-11-20 08:33:19.500542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.283 [2024-11-20 08:33:19.500628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:32.283 [2024-11-20 08:33:19.500656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4023.665 ms 00:20:32.283 [2024-11-20 08:33:19.500667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.283 [2024-11-20 08:33:19.546137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.546391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.284 [2024-11-20 08:33:19.546425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.241 ms 00:20:32.284 [2024-11-20 08:33:19.546437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.546573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.546587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.284 [2024-11-20 08:33:19.546607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:32.284 [2024-11-20 08:33:19.546617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.625379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.625426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.284 [2024-11-20 08:33:19.625447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.848 ms 00:20:32.284 [2024-11-20 08:33:19.625458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.625502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.625517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.284 [2024-11-20 08:33:19.625531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.284 [2024-11-20 08:33:19.625542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.626409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.626433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.284 [2024-11-20 08:33:19.626448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:20:32.284 [2024-11-20 08:33:19.626459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.626576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.626589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.284 [2024-11-20 08:33:19.626607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:32.284 [2024-11-20 08:33:19.626617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.649229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.649264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.284 [2024-11-20 
08:33:19.649281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.626 ms 00:20:32.284 [2024-11-20 08:33:19.649291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.663276] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:32.284 [2024-11-20 08:33:19.672144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.672401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.284 [2024-11-20 08:33:19.672423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.808 ms 00:20:32.284 [2024-11-20 08:33:19.672438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.771434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.771483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:32.284 [2024-11-20 08:33:19.771499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.126 ms 00:20:32.284 [2024-11-20 08:33:19.771512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.771693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.771715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.284 [2024-11-20 08:33:19.771726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:20:32.284 [2024-11-20 08:33:19.771739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.805654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.805882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:32.284 [2024-11-20 08:33:19.805905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.919 ms 00:20:32.284 [2024-11-20 08:33:19.805920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.840295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.840337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:32.284 [2024-11-20 08:33:19.840353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.390 ms 00:20:32.284 [2024-11-20 08:33:19.840367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.284 [2024-11-20 08:33:19.841115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.284 [2024-11-20 08:33:19.841141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.284 [2024-11-20 08:33:19.841154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:32.284 [2024-11-20 08:33:19.841168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.543 [2024-11-20 08:33:19.944727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.543 [2024-11-20 08:33:19.944904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:32.543 [2024-11-20 08:33:19.944926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.674 ms 00:20:32.543 [2024-11-20 08:33:19.944942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.543 [2024-11-20 
08:33:19.980733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.543 [2024-11-20 08:33:19.980774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:32.543 [2024-11-20 08:33:19.980789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.714 ms 00:20:32.543 [2024-11-20 08:33:19.980806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.543 [2024-11-20 08:33:20.016113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.543 [2024-11-20 08:33:20.016159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:32.543 [2024-11-20 08:33:20.016173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.327 ms 00:20:32.543 [2024-11-20 08:33:20.016187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.543 [2024-11-20 08:33:20.052863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.543 [2024-11-20 08:33:20.052905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.543 [2024-11-20 08:33:20.052919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.696 ms 00:20:32.543 [2024-11-20 08:33:20.052933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.543 [2024-11-20 08:33:20.052978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.543 [2024-11-20 08:33:20.053010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.544 [2024-11-20 08:33:20.053022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:32.544 [2024-11-20 08:33:20.053036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.544 [2024-11-20 08:33:20.053172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.544 [2024-11-20 08:33:20.053189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.544 [2024-11-20 08:33:20.053200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:32.544 [2024-11-20 08:33:20.053213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.544 [2024-11-20 08:33:20.054542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4597.121 ms, result 0 00:20:32.544 { 00:20:32.544 "name": "ftl0", 00:20:32.544 "uuid": "b3cb80c3-b17e-4cf9-ac5e-957ca14b3571" 00:20:32.544 } 00:20:32.544 08:33:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:32.544 08:33:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:32.544 08:33:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:32.803 08:33:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:32.803 [2024-11-20 08:33:20.358257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:33.062 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:33.062 Zero copy mechanism will not be used. 00:20:33.062 Running I/O for 4 seconds... 
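Note: FTL startup above finished in 4597.121 ms, of which the NV cache scrub alone took 4023.665 ms. The three timed passes that follow are driven through bdevperf's RPC helper; a minimal replay of the same sequence, assuming a bdevperf instance is already running and waiting for RPCs, would look like:

    BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    # QD1, 68 KiB random writes; 69632 B exceeds the 65536 B zero-copy
    # threshold, so zero copy is disabled for this pass (see notice above)
    $BPERF perform_tests -q 1 -w randwrite -t 4 -o 69632
    # QD128, 4 KiB random writes
    $BPERF perform_tests -q 128 -w randwrite -t 4 -o 4096
    # QD128, 4 KiB read-back verification pass
    $BPERF perform_tests -q 128 -w verify -t 4 -o 4096
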
00:20:34.937 1339.00 IOPS, 88.92 MiB/s [2024-11-20T08:33:23.435Z] 1357.00 IOPS, 90.11 MiB/s [2024-11-20T08:33:24.372Z] 1373.33 IOPS, 91.20 MiB/s [2024-11-20T08:33:24.372Z] 1404.50 IOPS, 93.27 MiB/s 00:20:36.811 Latency(us) 00:20:36.811 [2024-11-20T08:33:24.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.811 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:36.811 ftl0 : 4.00 1404.12 93.24 0.00 0.00 747.13 276.36 18634.33 00:20:36.811 [2024-11-20T08:33:24.372Z] =================================================================================================================== 00:20:36.811 [2024-11-20T08:33:24.372Z] Total : 1404.12 93.24 0.00 0.00 747.13 276.36 18634.33 00:20:36.811 [2024-11-20 08:33:24.364146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:37.071 { 00:20:37.071 "results": [ 00:20:37.071 { 00:20:37.071 "job": "ftl0", 00:20:37.071 "core_mask": "0x1", 00:20:37.071 "workload": "randwrite", 00:20:37.071 "status": "finished", 00:20:37.071 "queue_depth": 1, 00:20:37.071 "io_size": 69632, 00:20:37.071 "runtime": 4.001786, 00:20:37.071 "iops": 1404.1230590541322, 00:20:37.071 "mibps": 93.24254689031346, 00:20:37.071 "io_failed": 0, 00:20:37.071 "io_timeout": 0, 00:20:37.071 "avg_latency_us": 747.1252004279798, 00:20:37.071 "min_latency_us": 276.3566265060241, 00:20:37.071 "max_latency_us": 18634.332530120482 00:20:37.071 } 00:20:37.071 ], 00:20:37.071 "core_count": 1 00:20:37.071 } 00:20:37.071 08:33:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:37.071 [2024-11-20 08:33:24.501562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:37.071 Running I/O for 4 seconds... 
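Note: the "mibps" field in the first randwrite pass's JSON above is simply IOPS x I/O size: 1404.123 IOPS x 69632 B / 2^20 ~= 93.24 MiB/s, matching the reported 93.2425. The same identity holds for every pass in this run:

    echo '1404.1230590541322 * 69632 / 1048576' | bc -l   # ~= 93.2425
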
00:20:39.384 9933.00 IOPS, 38.80 MiB/s [2024-11-20T08:33:27.514Z] 10662.50 IOPS, 41.65 MiB/s [2024-11-20T08:33:28.893Z] 10922.00 IOPS, 42.66 MiB/s [2024-11-20T08:33:28.893Z] 11005.25 IOPS, 42.99 MiB/s 00:20:41.332 Latency(us) 00:20:41.332 [2024-11-20T08:33:28.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.332 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:41.332 ftl0 : 4.01 10996.65 42.96 0.00 0.00 11617.96 227.01 30320.27 00:20:41.332 [2024-11-20T08:33:28.893Z] =================================================================================================================== 00:20:41.332 [2024-11-20T08:33:28.893Z] Total : 10996.65 42.96 0.00 0.00 11617.96 0.00 30320.27 00:20:41.332 [2024-11-20 08:33:28.519097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:41.332 { 00:20:41.332 "results": [ 00:20:41.332 { 00:20:41.332 "job": "ftl0", 00:20:41.332 "core_mask": "0x1", 00:20:41.332 "workload": "randwrite", 00:20:41.332 "status": "finished", 00:20:41.332 "queue_depth": 128, 00:20:41.332 "io_size": 4096, 00:20:41.332 "runtime": 4.014404, 00:20:41.332 "iops": 10996.651059534615, 00:20:41.332 "mibps": 42.95566820130709, 00:20:41.332 "io_failed": 0, 00:20:41.332 "io_timeout": 0, 00:20:41.332 "avg_latency_us": 11617.962769606003, 00:20:41.332 "min_latency_us": 227.00722891566264, 00:20:41.332 "max_latency_us": 30320.269879518073 00:20:41.332 } 00:20:41.332 ], 00:20:41.332 "core_count": 1 00:20:41.332 } 00:20:41.332 08:33:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:41.332 [2024-11-20 08:33:28.658576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:41.332 Running I/O for 4 seconds... 
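Note: the QD128 randwrite figures above are internally consistent with Little's law (queue depth ~= IOPS x mean latency): 10996.65 IOPS x 11617.96 us comes to about 127.8 requests in flight, i.e. the configured queue depth of 128.

    echo '10996.651059534615 * 11617.962769606003 / 1000000' | bc -l   # ~= 127.76
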
00:20:43.209 8980.00 IOPS, 35.08 MiB/s [2024-11-20T08:33:31.706Z] 8873.50 IOPS, 34.66 MiB/s [2024-11-20T08:33:32.674Z] 8639.00 IOPS, 33.75 MiB/s [2024-11-20T08:33:32.674Z] 8538.75 IOPS, 33.35 MiB/s 00:20:45.113 Latency(us) 00:20:45.113 [2024-11-20T08:33:32.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.113 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:45.113 Verification LBA range: start 0x0 length 0x1400000 00:20:45.113 ftl0 : 4.01 8551.36 33.40 0.00 0.00 14925.60 245.10 32846.96 00:20:45.113 [2024-11-20T08:33:32.674Z] =================================================================================================================== 00:20:45.113 [2024-11-20T08:33:32.674Z] Total : 8551.36 33.40 0.00 0.00 14925.60 0.00 32846.96 00:20:45.372 { 00:20:45.372 "results": [ 00:20:45.372 { 00:20:45.372 "job": "ftl0", 00:20:45.372 "core_mask": "0x1", 00:20:45.372 "workload": "verify", 00:20:45.372 "status": "finished", 00:20:45.372 "verify_range": { 00:20:45.372 "start": 0, 00:20:45.372 "length": 20971520 00:20:45.372 }, 00:20:45.372 "queue_depth": 128, 00:20:45.372 "io_size": 4096, 00:20:45.372 "runtime": 4.009071, 00:20:45.372 "iops": 8551.357658669553, 00:20:45.372 "mibps": 33.40374085417794, 00:20:45.372 "io_failed": 0, 00:20:45.372 "io_timeout": 0, 00:20:45.372 "avg_latency_us": 14925.596516497983, 00:20:45.372 "min_latency_us": 245.1020080321285, 00:20:45.372 "max_latency_us": 32846.95903614458 00:20:45.372 } 00:20:45.372 ], 00:20:45.372 "core_count": 1 00:20:45.372 } 00:20:45.372 [2024-11-20 08:33:32.684125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:45.372 08:33:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:45.372 [2024-11-20 08:33:32.887501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.372 [2024-11-20 08:33:32.887564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.372 [2024-11-20 08:33:32.887584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:45.372 [2024-11-20 08:33:32.887598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.372 [2024-11-20 08:33:32.887623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.372 [2024-11-20 08:33:32.891744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.372 [2024-11-20 08:33:32.891777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.372 [2024-11-20 08:33:32.891792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:20:45.372 [2024-11-20 08:33:32.891819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.372 [2024-11-20 08:33:32.893538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.372 [2024-11-20 08:33:32.893704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.372 [2024-11-20 08:33:32.893734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:20:45.372 [2024-11-20 08:33:32.893746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.065959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.066035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:45.631 [2024-11-20 08:33:33.066061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 172.451 ms 00:20:45.631 [2024-11-20 08:33:33.066072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.071149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.071188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:45.631 [2024-11-20 08:33:33.071204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:20:45.631 [2024-11-20 08:33:33.071214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.109584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.109790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.631 [2024-11-20 08:33:33.109820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.351 ms 00:20:45.631 [2024-11-20 08:33:33.109831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.132672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.132850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.631 [2024-11-20 08:33:33.132883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.828 ms 00:20:45.631 [2024-11-20 08:33:33.132895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.133084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.133100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.631 [2024-11-20 08:33:33.133118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:45.631 [2024-11-20 08:33:33.133129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.631 [2024-11-20 08:33:33.170352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.631 [2024-11-20 08:33:33.170397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:45.631 [2024-11-20 08:33:33.170415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.259 ms 00:20:45.631 [2024-11-20 08:33:33.170426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.946 [2024-11-20 08:33:33.206517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.946 [2024-11-20 08:33:33.206703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:45.946 [2024-11-20 08:33:33.206732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.102 ms 00:20:45.946 [2024-11-20 08:33:33.206743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.946 [2024-11-20 08:33:33.242596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.946 [2024-11-20 08:33:33.242645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.946 [2024-11-20 08:33:33.242664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.864 ms 00:20:45.946 [2024-11-20 08:33:33.242674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.946 [2024-11-20 08:33:33.279364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.946 [2024-11-20 08:33:33.279414] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.946 [2024-11-20 08:33:33.279435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.621 ms 00:20:45.946 [2024-11-20 08:33:33.279445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.946 [2024-11-20 08:33:33.279491] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.946 [2024-11-20 08:33:33.279509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:45.946 [2024-11-20 08:33:33.279783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.279982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.280010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.280032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.280046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.946 [2024-11-20 08:33:33.280057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280742] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.947 [2024-11-20 08:33:33.280800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.947 [2024-11-20 08:33:33.280814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b3cb80c3-b17e-4cf9-ac5e-957ca14b3571 00:20:45.947 [2024-11-20 08:33:33.280825] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.947 [2024-11-20 08:33:33.280838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.947 [2024-11-20 08:33:33.280852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.947 [2024-11-20 08:33:33.280865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.947 [2024-11-20 08:33:33.280875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.947 [2024-11-20 08:33:33.280888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.947 [2024-11-20 08:33:33.280897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.947 [2024-11-20 08:33:33.280912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.947 [2024-11-20 08:33:33.280921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.947 [2024-11-20 08:33:33.280933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.947 [2024-11-20 08:33:33.280943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.947 [2024-11-20 08:33:33.280957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:20:45.947 [2024-11-20 08:33:33.280967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.301324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.947 [2024-11-20 08:33:33.301495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.947 [2024-11-20 08:33:33.301522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.324 ms 00:20:45.947 [2024-11-20 08:33:33.301533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.302102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.947 [2024-11-20 08:33:33.302120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.947 [2024-11-20 08:33:33.302134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:45.947 [2024-11-20 08:33:33.302144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.358816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.947 [2024-11-20 08:33:33.358875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.947 [2024-11-20 08:33:33.358896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.947 [2024-11-20 08:33:33.358906] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.358979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.947 [2024-11-20 08:33:33.359015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.947 [2024-11-20 08:33:33.359029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.947 [2024-11-20 08:33:33.359039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.359171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.947 [2024-11-20 08:33:33.359189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.947 [2024-11-20 08:33:33.359202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.947 [2024-11-20 08:33:33.359213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.359233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.947 [2024-11-20 08:33:33.359244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.947 [2024-11-20 08:33:33.359257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.947 [2024-11-20 08:33:33.359267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.947 [2024-11-20 08:33:33.487151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.947 [2024-11-20 08:33:33.487211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.947 [2024-11-20 08:33:33.487232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.947 [2024-11-20 08:33:33.487243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.207 [2024-11-20 08:33:33.591443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.207 [2024-11-20 08:33:33.591454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.207 [2024-11-20 08:33:33.591608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.207 [2024-11-20 08:33:33.591618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.207 [2024-11-20 08:33:33.591701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.207 [2024-11-20 08:33:33.591711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.207 [2024-11-20 08:33:33.591846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:46.207 [2024-11-20 08:33:33.591857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.207 [2024-11-20 08:33:33.591922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.207 [2024-11-20 08:33:33.591932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.207 [2024-11-20 08:33:33.591973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.207 [2024-11-20 08:33:33.591985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.207 [2024-11-20 08:33:33.592027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.208 [2024-11-20 08:33:33.592040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.208 [2024-11-20 08:33:33.592087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.208 [2024-11-20 08:33:33.592108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.208 [2024-11-20 08:33:33.592122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.208 [2024-11-20 08:33:33.592132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.208 [2024-11-20 08:33:33.592262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 705.858 ms, result 0 00:20:46.208 true 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 74780 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' -z 74780 ']' 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@961 -- # kill -0 74780 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # uname 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 74780 00:20:46.208 killing process with pid 74780 00:20:46.208 Received shutdown signal, test time was about 4.000000 seconds 00:20:46.208 00:20:46.208 Latency(us) 00:20:46.208 [2024-11-20T08:33:33.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.208 [2024-11-20T08:33:33.769Z] =================================================================================================================== 00:20:46.208 [2024-11-20T08:33:33.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@975 -- # echo 'killing process with pid 74780' 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # kill 74780 00:20:46.208 08:33:33 ftl.ftl_bdevperf -- common/autotest_common.sh@981 -- # wait 74780 00:20:47.585 Remove shared memory files 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:47.585 08:33:34 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:47.585 08:33:34 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:47.586 08:33:34 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:47.586 ************************************ 00:20:47.586 END TEST ftl_bdevperf 00:20:47.586 ************************************ 00:20:47.586 00:20:47.586 real 0m23.896s 00:20:47.586 user 0m26.518s 00:20:47.586 sys 0m1.283s 00:20:47.586 08:33:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:47.586 08:33:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:47.586 08:33:35 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:47.586 08:33:35 ftl -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:20:47.586 08:33:35 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:47.586 08:33:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:47.586 ************************************ 00:20:47.586 START TEST ftl_trim 00:20:47.586 ************************************ 00:20:47.586 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:47.845 * Looking for test storage... 00:20:47.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.845 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:20:47.845 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:20:47.845 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1638 -- # lcov --version 00:20:47.845 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:47.845 08:33:35 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.846 08:33:35 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:20:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.846 --rc genhtml_branch_coverage=1 00:20:47.846 --rc genhtml_function_coverage=1 00:20:47.846 --rc genhtml_legend=1 00:20:47.846 --rc geninfo_all_blocks=1 00:20:47.846 --rc geninfo_unexecuted_blocks=1 00:20:47.846 00:20:47.846 ' 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:20:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.846 --rc genhtml_branch_coverage=1 00:20:47.846 --rc genhtml_function_coverage=1 00:20:47.846 --rc genhtml_legend=1 00:20:47.846 --rc geninfo_all_blocks=1 00:20:47.846 --rc geninfo_unexecuted_blocks=1 00:20:47.846 00:20:47.846 ' 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:20:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.846 --rc genhtml_branch_coverage=1 00:20:47.846 --rc genhtml_function_coverage=1 00:20:47.846 --rc genhtml_legend=1 00:20:47.846 --rc geninfo_all_blocks=1 00:20:47.846 --rc geninfo_unexecuted_blocks=1 00:20:47.846 00:20:47.846 ' 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:20:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.846 --rc genhtml_branch_coverage=1 00:20:47.846 --rc genhtml_function_coverage=1 00:20:47.846 --rc genhtml_legend=1 00:20:47.846 --rc geninfo_all_blocks=1 00:20:47.846 --rc geninfo_unexecuted_blocks=1 00:20:47.846 00:20:47.846 ' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:47.846 08:33:35 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75148 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:47.846 08:33:35 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75148 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@838 -- # '[' -z 75148 ']' 00:20:47.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@843 -- # local max_retries=100 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@847 -- # xtrace_disable 00:20:47.846 08:33:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:48.106 [2024-11-20 08:33:35.508011] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:20:48.106 [2024-11-20 08:33:35.508761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75148 ] 00:20:48.366 [2024-11-20 08:33:35.693383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.366 [2024-11-20 08:33:35.815811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.366 [2024-11-20 08:33:35.815952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.366 [2024-11-20 08:33:35.816021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.305 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:20:49.305 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@871 -- # return 0 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:49.305 08:33:36 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:49.564 08:33:36 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:49.564 08:33:36 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:49.564 08:33:36 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:49.564 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@1370 -- # local bdev_name=nvme0n1 00:20:49.564 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:49.564 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@1372 -- # local bs 00:20:49.564 08:33:36 ftl.ftl_trim -- common/autotest_common.sh@1373 -- # local nb 00:20:49.564 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:49.822 { 00:20:49.822 "name": "nvme0n1", 00:20:49.822 "aliases": [ 
00:20:49.822 "2026655f-296b-47fd-9461-e4c736731373" 00:20:49.822 ], 00:20:49.822 "product_name": "NVMe disk", 00:20:49.822 "block_size": 4096, 00:20:49.822 "num_blocks": 1310720, 00:20:49.822 "uuid": "2026655f-296b-47fd-9461-e4c736731373", 00:20:49.822 "numa_id": -1, 00:20:49.822 "assigned_rate_limits": { 00:20:49.822 "rw_ios_per_sec": 0, 00:20:49.822 "rw_mbytes_per_sec": 0, 00:20:49.822 "r_mbytes_per_sec": 0, 00:20:49.822 "w_mbytes_per_sec": 0 00:20:49.822 }, 00:20:49.822 "claimed": true, 00:20:49.822 "claim_type": "read_many_write_one", 00:20:49.822 "zoned": false, 00:20:49.822 "supported_io_types": { 00:20:49.822 "read": true, 00:20:49.822 "write": true, 00:20:49.822 "unmap": true, 00:20:49.822 "flush": true, 00:20:49.822 "reset": true, 00:20:49.822 "nvme_admin": true, 00:20:49.822 "nvme_io": true, 00:20:49.822 "nvme_io_md": false, 00:20:49.822 "write_zeroes": true, 00:20:49.822 "zcopy": false, 00:20:49.822 "get_zone_info": false, 00:20:49.822 "zone_management": false, 00:20:49.822 "zone_append": false, 00:20:49.822 "compare": true, 00:20:49.822 "compare_and_write": false, 00:20:49.822 "abort": true, 00:20:49.822 "seek_hole": false, 00:20:49.822 "seek_data": false, 00:20:49.822 "copy": true, 00:20:49.822 "nvme_iov_md": false 00:20:49.822 }, 00:20:49.822 "driver_specific": { 00:20:49.822 "nvme": [ 00:20:49.822 { 00:20:49.822 "pci_address": "0000:00:11.0", 00:20:49.822 "trid": { 00:20:49.822 "trtype": "PCIe", 00:20:49.822 "traddr": "0000:00:11.0" 00:20:49.822 }, 00:20:49.822 "ctrlr_data": { 00:20:49.822 "cntlid": 0, 00:20:49.822 "vendor_id": "0x1b36", 00:20:49.822 "model_number": "QEMU NVMe Ctrl", 00:20:49.822 "serial_number": "12341", 00:20:49.822 "firmware_revision": "8.0.0", 00:20:49.822 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:49.822 "oacs": { 00:20:49.822 "security": 0, 00:20:49.822 "format": 1, 00:20:49.822 "firmware": 0, 00:20:49.822 "ns_manage": 1 00:20:49.822 }, 00:20:49.822 "multi_ctrlr": false, 00:20:49.822 "ana_reporting": false 00:20:49.822 }, 00:20:49.822 "vs": { 00:20:49.822 "nvme_version": "1.4" 00:20:49.822 }, 00:20:49.822 "ns_data": { 00:20:49.822 "id": 1, 00:20:49.822 "can_share": false 00:20:49.822 } 00:20:49.822 } 00:20:49.822 ], 00:20:49.822 "mp_policy": "active_passive" 00:20:49.822 } 00:20:49.822 } 00:20:49.822 ]' 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # bs=4096 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # nb=1310720 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:20:49.822 08:33:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # echo 5120 00:20:49.822 08:33:37 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:49.822 08:33:37 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:49.822 08:33:37 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:49.822 08:33:37 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:49.822 08:33:37 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:50.080 08:33:37 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6463ebc0-405c-41f3-9a0f-97eee14660d5 00:20:50.080 08:33:37 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:50.080 08:33:37 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6463ebc0-405c-41f3-9a0f-97eee14660d5 00:20:50.338 08:33:37 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:50.596 08:33:37 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a34b27d6-bbb0-4c5b-bd15-2c519592befd 00:20:50.597 08:33:37 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a34b27d6-bbb0-4c5b-bd15-2c519592befd 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1370 -- # local bdev_name=bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1372 -- # local bs 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1373 -- # local nb 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:50.923 { 00:20:50.923 "name": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:50.923 "aliases": [ 00:20:50.923 "lvs/nvme0n1p0" 00:20:50.923 ], 00:20:50.923 "product_name": "Logical Volume", 00:20:50.923 "block_size": 4096, 00:20:50.923 "num_blocks": 26476544, 00:20:50.923 "uuid": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:50.923 "assigned_rate_limits": { 00:20:50.923 "rw_ios_per_sec": 0, 00:20:50.923 "rw_mbytes_per_sec": 0, 00:20:50.923 "r_mbytes_per_sec": 0, 00:20:50.923 "w_mbytes_per_sec": 0 00:20:50.923 }, 00:20:50.923 "claimed": false, 00:20:50.923 "zoned": false, 00:20:50.923 "supported_io_types": { 00:20:50.923 "read": true, 00:20:50.923 "write": true, 00:20:50.923 "unmap": true, 00:20:50.923 "flush": false, 00:20:50.923 "reset": true, 00:20:50.923 "nvme_admin": false, 00:20:50.923 "nvme_io": false, 00:20:50.923 "nvme_io_md": false, 00:20:50.923 "write_zeroes": true, 00:20:50.923 "zcopy": false, 00:20:50.923 "get_zone_info": false, 00:20:50.923 "zone_management": false, 00:20:50.923 "zone_append": false, 00:20:50.923 "compare": false, 00:20:50.923 "compare_and_write": false, 00:20:50.923 "abort": false, 00:20:50.923 "seek_hole": true, 00:20:50.923 "seek_data": true, 00:20:50.923 "copy": false, 00:20:50.923 "nvme_iov_md": false 00:20:50.923 }, 00:20:50.923 "driver_specific": { 00:20:50.923 "lvol": { 00:20:50.923 "lvol_store_uuid": "a34b27d6-bbb0-4c5b-bd15-2c519592befd", 00:20:50.923 "base_bdev": "nvme0n1", 00:20:50.923 "thin_provision": true, 00:20:50.923 "num_allocated_clusters": 0, 00:20:50.923 "snapshot": false, 00:20:50.923 "clone": false, 00:20:50.923 "esnap_clone": false 00:20:50.923 } 00:20:50.923 } 00:20:50.923 } 00:20:50.923 ]' 00:20:50.923 08:33:38 ftl.ftl_trim -- 
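Everything up to this point established the base device: nvme0n1 reports block_size 4096 and num_blocks 1310720, i.e. 5120 MiB, yet the lvol carved from it is created with -t (thin provisioning) at 103424 MiB, which is why the virtual size can exceed the 5 GiB backing namespace. A sketch of the size derivation the get_bdev_size helper performs, assuming only the rpc.py and jq calls shown in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$($rpc bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 in this run
  echo $(( bs * nb / 1024 / 1024 ))         # 5120 MiB, matching bdev_size above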
common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # bs=4096 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # nb=26476544 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:50.923 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # echo 103424 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:50.923 08:33:38 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:51.181 08:33:38 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:51.181 08:33:38 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:51.181 08:33:38 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:51.181 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1370 -- # local bdev_name=bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:51.181 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:51.181 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1372 -- # local bs 00:20:51.181 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1373 -- # local nb 00:20:51.441 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:51.441 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:51.441 { 00:20:51.441 "name": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:51.441 "aliases": [ 00:20:51.441 "lvs/nvme0n1p0" 00:20:51.441 ], 00:20:51.441 "product_name": "Logical Volume", 00:20:51.441 "block_size": 4096, 00:20:51.441 "num_blocks": 26476544, 00:20:51.441 "uuid": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:51.441 "assigned_rate_limits": { 00:20:51.441 "rw_ios_per_sec": 0, 00:20:51.441 "rw_mbytes_per_sec": 0, 00:20:51.441 "r_mbytes_per_sec": 0, 00:20:51.441 "w_mbytes_per_sec": 0 00:20:51.441 }, 00:20:51.441 "claimed": false, 00:20:51.441 "zoned": false, 00:20:51.441 "supported_io_types": { 00:20:51.441 "read": true, 00:20:51.441 "write": true, 00:20:51.441 "unmap": true, 00:20:51.441 "flush": false, 00:20:51.441 "reset": true, 00:20:51.441 "nvme_admin": false, 00:20:51.441 "nvme_io": false, 00:20:51.441 "nvme_io_md": false, 00:20:51.441 "write_zeroes": true, 00:20:51.441 "zcopy": false, 00:20:51.441 "get_zone_info": false, 00:20:51.441 "zone_management": false, 00:20:51.441 "zone_append": false, 00:20:51.441 "compare": false, 00:20:51.441 "compare_and_write": false, 00:20:51.441 "abort": false, 00:20:51.441 "seek_hole": true, 00:20:51.441 "seek_data": true, 00:20:51.441 "copy": false, 00:20:51.441 "nvme_iov_md": false 00:20:51.441 }, 00:20:51.441 "driver_specific": { 00:20:51.441 "lvol": { 00:20:51.441 "lvol_store_uuid": "a34b27d6-bbb0-4c5b-bd15-2c519592befd", 00:20:51.441 "base_bdev": "nvme0n1", 00:20:51.441 "thin_provision": true, 00:20:51.441 "num_allocated_clusters": 0, 00:20:51.441 "snapshot": false, 00:20:51.441 "clone": false, 00:20:51.441 "esnap_clone": false 00:20:51.441 } 00:20:51.441 } 00:20:51.441 } 00:20:51.441 ]' 00:20:51.700 08:33:38 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:51.700 08:33:39 ftl.ftl_trim -- 
common/autotest_common.sh@1375 -- # bs=4096 00:20:51.700 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:51.700 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # nb=26476544 00:20:51.700 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:51.700 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # echo 103424 00:20:51.700 08:33:39 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:51.700 08:33:39 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:51.960 08:33:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:51.960 08:33:39 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:51.960 08:33:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:51.960 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1370 -- # local bdev_name=bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:51.960 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1371 -- # local bdev_info 00:20:51.960 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1372 -- # local bs 00:20:51.960 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1373 -- # local nb 00:20:51.960 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bef3d361-78b1-4d8a-b7ff-753a36852ca1 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:20:52.220 { 00:20:52.220 "name": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:52.220 "aliases": [ 00:20:52.220 "lvs/nvme0n1p0" 00:20:52.220 ], 00:20:52.220 "product_name": "Logical Volume", 00:20:52.220 "block_size": 4096, 00:20:52.220 "num_blocks": 26476544, 00:20:52.220 "uuid": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:52.220 "assigned_rate_limits": { 00:20:52.220 "rw_ios_per_sec": 0, 00:20:52.220 "rw_mbytes_per_sec": 0, 00:20:52.220 "r_mbytes_per_sec": 0, 00:20:52.220 "w_mbytes_per_sec": 0 00:20:52.220 }, 00:20:52.220 "claimed": false, 00:20:52.220 "zoned": false, 00:20:52.220 "supported_io_types": { 00:20:52.220 "read": true, 00:20:52.220 "write": true, 00:20:52.220 "unmap": true, 00:20:52.220 "flush": false, 00:20:52.220 "reset": true, 00:20:52.220 "nvme_admin": false, 00:20:52.220 "nvme_io": false, 00:20:52.220 "nvme_io_md": false, 00:20:52.220 "write_zeroes": true, 00:20:52.220 "zcopy": false, 00:20:52.220 "get_zone_info": false, 00:20:52.220 "zone_management": false, 00:20:52.220 "zone_append": false, 00:20:52.220 "compare": false, 00:20:52.220 "compare_and_write": false, 00:20:52.220 "abort": false, 00:20:52.220 "seek_hole": true, 00:20:52.220 "seek_data": true, 00:20:52.220 "copy": false, 00:20:52.220 "nvme_iov_md": false 00:20:52.220 }, 00:20:52.220 "driver_specific": { 00:20:52.220 "lvol": { 00:20:52.220 "lvol_store_uuid": "a34b27d6-bbb0-4c5b-bd15-2c519592befd", 00:20:52.220 "base_bdev": "nvme0n1", 00:20:52.220 "thin_provision": true, 00:20:52.220 "num_allocated_clusters": 0, 00:20:52.220 "snapshot": false, 00:20:52.220 "clone": false, 00:20:52.220 "esnap_clone": false 00:20:52.220 } 00:20:52.220 } 00:20:52.220 } 00:20:52.220 ]' 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # bs=4096 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # 
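The cache side mirrors the base side: nvc0 was attached at 0000:00:10.0, the same size probe yields cache_size=5171, and bdev_split_create then carves a single 5171 MiB partition, nvc0n1p0, to serve as the FTL write-buffer cache. A sketch of those two steps, restating the commands from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach the cache controller; SPDK exposes its namespace as nvc0n1.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # One 5171 MiB split; the resulting bdev is named nvc0n1p0.
  $rpc bdev_split_create nvc0n1 -s 5171 1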
nb=26476544 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:20:52.220 08:33:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # echo 103424 00:20:52.220 08:33:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:52.220 08:33:39 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bef3d361-78b1-4d8a-b7ff-753a36852ca1 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:52.481 [2024-11-20 08:33:39.787714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.787772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:52.481 [2024-11-20 08:33:39.787790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:52.481 [2024-11-20 08:33:39.787801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.791202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.791242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.481 [2024-11-20 08:33:39.791257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:20:52.481 [2024-11-20 08:33:39.791268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.791417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:52.481 [2024-11-20 08:33:39.792334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:52.481 [2024-11-20 08:33:39.792373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.792384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.481 [2024-11-20 08:33:39.792397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:20:52.481 [2024-11-20 08:33:39.792407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.792538] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 39ff582c-205c-44ba-a49f-16afcb66c63b 00:20:52.481 [2024-11-20 08:33:39.793947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.794118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:52.481 [2024-11-20 08:33:39.794139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:52.481 [2024-11-20 08:33:39.794153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.801569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.801708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.481 [2024-11-20 08:33:39.801733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:20:52.481 [2024-11-20 08:33:39.801746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.801917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.801935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.481 [2024-11-20 08:33:39.801946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.082 ms 00:20:52.481 [2024-11-20 08:33:39.801963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.802050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.802065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:52.481 [2024-11-20 08:33:39.802076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:52.481 [2024-11-20 08:33:39.802089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.802148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:52.481 [2024-11-20 08:33:39.807342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.807374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.481 [2024-11-20 08:33:39.807394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.205 ms 00:20:52.481 [2024-11-20 08:33:39.807404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.807491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.807503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:52.481 [2024-11-20 08:33:39.807516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:52.481 [2024-11-20 08:33:39.807543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.807603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:52.481 [2024-11-20 08:33:39.807728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:52.481 [2024-11-20 08:33:39.807748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:52.481 [2024-11-20 08:33:39.807762] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:52.481 [2024-11-20 08:33:39.807778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:52.481 [2024-11-20 08:33:39.807790] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:52.481 [2024-11-20 08:33:39.807804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:52.481 [2024-11-20 08:33:39.807814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:52.481 [2024-11-20 08:33:39.807827] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:52.481 [2024-11-20 08:33:39.807839] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:52.481 [2024-11-20 08:33:39.807853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 [2024-11-20 08:33:39.807863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:52.481 [2024-11-20 08:33:39.807878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:52.481 [2024-11-20 08:33:39.807888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.808016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.481 
[2024-11-20 08:33:39.808028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:52.481 [2024-11-20 08:33:39.808041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:52.481 [2024-11-20 08:33:39.808051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.481 [2024-11-20 08:33:39.808246] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:52.481 [2024-11-20 08:33:39.808260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:52.481 [2024-11-20 08:33:39.808274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.481 [2024-11-20 08:33:39.808285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.481 [2024-11-20 08:33:39.808297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:52.481 [2024-11-20 08:33:39.808307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:52.481 [2024-11-20 08:33:39.808319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:52.482 [2024-11-20 08:33:39.808341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.482 [2024-11-20 08:33:39.808363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:52.482 [2024-11-20 08:33:39.808372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:52.482 [2024-11-20 08:33:39.808384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.482 [2024-11-20 08:33:39.808394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:52.482 [2024-11-20 08:33:39.808406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:52.482 [2024-11-20 08:33:39.808415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:52.482 [2024-11-20 08:33:39.808439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:52.482 [2024-11-20 08:33:39.808473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:52.482 [2024-11-20 08:33:39.808503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:52.482 [2024-11-20 08:33:39.808536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:52.482 [2024-11-20 08:33:39.808566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:52.482 [2024-11-20 08:33:39.808608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.482 [2024-11-20 08:33:39.808630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:52.482 [2024-11-20 08:33:39.808639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:52.482 [2024-11-20 08:33:39.808651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.482 [2024-11-20 08:33:39.808661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:52.482 [2024-11-20 08:33:39.808673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:52.482 [2024-11-20 08:33:39.808682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:52.482 [2024-11-20 08:33:39.808703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:52.482 [2024-11-20 08:33:39.808715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808724] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:52.482 [2024-11-20 08:33:39.808736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:52.482 [2024-11-20 08:33:39.808746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.482 [2024-11-20 08:33:39.808770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:52.482 [2024-11-20 08:33:39.808785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:52.482 [2024-11-20 08:33:39.808795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:52.482 [2024-11-20 08:33:39.808807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:52.482 [2024-11-20 08:33:39.808816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:52.482 [2024-11-20 08:33:39.808828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:52.482 [2024-11-20 08:33:39.808842] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:52.482 [2024-11-20 08:33:39.808858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.808870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:52.482 [2024-11-20 08:33:39.808885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:52.482 [2024-11-20 08:33:39.808896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:52.482 [2024-11-20 08:33:39.808909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:52.482 [2024-11-20 08:33:39.808921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:52.482 [2024-11-20 08:33:39.808933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:52.482 [2024-11-20 08:33:39.808944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:52.482 [2024-11-20 08:33:39.808957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:52.482 [2024-11-20 08:33:39.808967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:52.482 [2024-11-20 08:33:39.808983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:52.482 [2024-11-20 08:33:39.809049] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:52.482 [2024-11-20 08:33:39.809070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:52.482 [2024-11-20 08:33:39.809095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:52.482 [2024-11-20 08:33:39.809105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:52.482 [2024-11-20 08:33:39.809119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:52.482 [2024-11-20 08:33:39.809132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.482 [2024-11-20 08:33:39.809145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:52.482 [2024-11-20 08:33:39.809155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:20:52.482 [2024-11-20 08:33:39.809167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.482 [2024-11-20 08:33:39.809312] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:52.482 [2024-11-20 08:33:39.809333] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:56.671 [2024-11-20 08:33:43.892005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.892069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:56.671 [2024-11-20 08:33:43.892087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4089.322 ms 00:20:56.671 [2024-11-20 08:33:43.892101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.931009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.931063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.671 [2024-11-20 08:33:43.931078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.576 ms 00:20:56.671 [2024-11-20 08:33:43.931092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.931260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.931275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.671 [2024-11-20 08:33:43.931287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:56.671 [2024-11-20 08:33:43.931303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.985332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.985384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.671 [2024-11-20 08:33:43.985402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.030 ms 00:20:56.671 [2024-11-20 08:33:43.985421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.985568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.985588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.671 [2024-11-20 08:33:43.985602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.671 [2024-11-20 08:33:43.985618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.986134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.986159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.671 [2024-11-20 08:33:43.986184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:56.671 [2024-11-20 08:33:43.986200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:43.986354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:43.986370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.671 [2024-11-20 08:33:43.986384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:56.671 [2024-11-20 08:33:43.986403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.008161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.008337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:56.671 [2024-11-20 08:33:44.008360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.698 ms 00:20:56.671 [2024-11-20 08:33:44.008374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.020386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:56.671 [2024-11-20 08:33:44.036890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.036938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:56.671 [2024-11-20 08:33:44.036955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.397 ms 00:20:56.671 [2024-11-20 08:33:44.036966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.145629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.145686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:56.671 [2024-11-20 08:33:44.145706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.690 ms 00:20:56.671 [2024-11-20 08:33:44.145718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.145969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.145983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:56.671 [2024-11-20 08:33:44.146018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:56.671 [2024-11-20 08:33:44.146029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.182153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.182197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:56.671 [2024-11-20 08:33:44.182214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.113 ms 00:20:56.671 [2024-11-20 08:33:44.182228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.218460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.218496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:56.671 [2024-11-20 08:33:44.218513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.183 ms 00:20:56.671 [2024-11-20 08:33:44.218523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.671 [2024-11-20 08:33:44.219372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.671 [2024-11-20 08:33:44.219397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:56.671 [2024-11-20 08:33:44.219411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:20:56.671 [2024-11-20 08:33:44.219421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.330566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.330616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:56.930 [2024-11-20 08:33:44.330636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.255 ms 00:20:56.930 [2024-11-20 08:33:44.330647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
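The layout numbers logged during startup are self-consistent: 23592960 L2P entries at the logged 4-byte address size come to exactly the 90.00 MiB l2p region, while --l2p_dram_limit 60 caps the resident portion, hence the "l2p maximum resident size is: 59 (of 60) MiB" notice, meaning the table is demand-paged rather than fully pinned. A quick check of that arithmetic:

  # Full L2P table size versus the DRAM limit passed to bdev_ftl_create.
  entries=23592960; addr_size=4
  echo $(( entries * addr_size / 1024 / 1024 ))   # 90 MiB full table
  # --l2p_dram_limit 60 keeps at most ~59 MiB of it resident at once.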
00:20:56.930 [2024-11-20 08:33:44.369435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.369478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:56.930 [2024-11-20 08:33:44.369495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.685 ms 00:20:56.930 [2024-11-20 08:33:44.369506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.405963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.406016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:56.930 [2024-11-20 08:33:44.406033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.404 ms 00:20:56.930 [2024-11-20 08:33:44.406043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.442381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.442421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:56.930 [2024-11-20 08:33:44.442438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.274 ms 00:20:56.930 [2024-11-20 08:33:44.442465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.442584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.442601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:56.930 [2024-11-20 08:33:44.442618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:56.930 [2024-11-20 08:33:44.442628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.442730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.930 [2024-11-20 08:33:44.442741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:56.930 [2024-11-20 08:33:44.442756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:56.930 [2024-11-20 08:33:44.442767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.930 [2024-11-20 08:33:44.443749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:56.930 [2024-11-20 08:33:44.447841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4663.355 ms, result 0 00:20:56.930 [2024-11-20 08:33:44.448973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:56.930 { 00:20:56.930 "name": "ftl0", 00:20:56.930 "uuid": "39ff582c-205c-44ba-a49f-16afcb66c63b" 00:20:56.930 } 00:20:56.930 08:33:44 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # local bdev_name=ftl0 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@907 -- # local bdev_timeout= 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@908 -- # local i 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@909 -- # [[ -z '' ]] 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@909 -- # bdev_timeout=2000 00:20:56.930 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@911 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:57.188 08:33:44 ftl.ftl_trim -- 
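Startup completed in 4663.355 ms and registered ftl0 with 23592960 blocks; at 4096 B per block that is 92160 MiB of user-visible capacity out of the 103424 MiB base lvol, roughly what the 10% overprovisioning requested at create time (plus FTL metadata regions) leaves over. waitforbdev then blocks until the bdev is ready; a sketch of the equivalent manual verification, using only RPCs already present in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b ftl0 -t 2000 | jq '.[] .num_blocks'   # 23592960
  echo $(( 23592960 * 4096 / 1024 / 1024 ))                    # 92160 MiB exposed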
common/autotest_common.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:57.473 [ 00:20:57.473 { 00:20:57.473 "name": "ftl0", 00:20:57.473 "aliases": [ 00:20:57.473 "39ff582c-205c-44ba-a49f-16afcb66c63b" 00:20:57.473 ], 00:20:57.473 "product_name": "FTL disk", 00:20:57.473 "block_size": 4096, 00:20:57.473 "num_blocks": 23592960, 00:20:57.473 "uuid": "39ff582c-205c-44ba-a49f-16afcb66c63b", 00:20:57.473 "assigned_rate_limits": { 00:20:57.473 "rw_ios_per_sec": 0, 00:20:57.473 "rw_mbytes_per_sec": 0, 00:20:57.473 "r_mbytes_per_sec": 0, 00:20:57.473 "w_mbytes_per_sec": 0 00:20:57.473 }, 00:20:57.473 "claimed": false, 00:20:57.473 "zoned": false, 00:20:57.473 "supported_io_types": { 00:20:57.473 "read": true, 00:20:57.473 "write": true, 00:20:57.473 "unmap": true, 00:20:57.473 "flush": true, 00:20:57.473 "reset": false, 00:20:57.473 "nvme_admin": false, 00:20:57.473 "nvme_io": false, 00:20:57.473 "nvme_io_md": false, 00:20:57.473 "write_zeroes": true, 00:20:57.473 "zcopy": false, 00:20:57.473 "get_zone_info": false, 00:20:57.473 "zone_management": false, 00:20:57.473 "zone_append": false, 00:20:57.473 "compare": false, 00:20:57.473 "compare_and_write": false, 00:20:57.473 "abort": false, 00:20:57.473 "seek_hole": false, 00:20:57.473 "seek_data": false, 00:20:57.473 "copy": false, 00:20:57.473 "nvme_iov_md": false 00:20:57.473 }, 00:20:57.473 "driver_specific": { 00:20:57.473 "ftl": { 00:20:57.473 "base_bdev": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 00:20:57.473 "cache": "nvc0n1p0" 00:20:57.473 } 00:20:57.473 } 00:20:57.473 } 00:20:57.473 ] 00:20:57.473 08:33:44 ftl.ftl_trim -- common/autotest_common.sh@914 -- # return 0 00:20:57.473 08:33:44 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:57.473 08:33:44 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:57.732 08:33:45 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:57.732 08:33:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:57.992 08:33:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:57.992 { 00:20:57.992 "name": "ftl0", 00:20:57.992 "aliases": [ 00:20:57.992 "39ff582c-205c-44ba-a49f-16afcb66c63b" 00:20:57.992 ], 00:20:57.992 "product_name": "FTL disk", 00:20:57.992 "block_size": 4096, 00:20:57.992 "num_blocks": 23592960, 00:20:57.992 "uuid": "39ff582c-205c-44ba-a49f-16afcb66c63b", 00:20:57.992 "assigned_rate_limits": { 00:20:57.992 "rw_ios_per_sec": 0, 00:20:57.992 "rw_mbytes_per_sec": 0, 00:20:57.992 "r_mbytes_per_sec": 0, 00:20:57.992 "w_mbytes_per_sec": 0 00:20:57.992 }, 00:20:57.992 "claimed": false, 00:20:57.992 "zoned": false, 00:20:57.992 "supported_io_types": { 00:20:57.992 "read": true, 00:20:57.992 "write": true, 00:20:57.992 "unmap": true, 00:20:57.992 "flush": true, 00:20:57.992 "reset": false, 00:20:57.992 "nvme_admin": false, 00:20:57.992 "nvme_io": false, 00:20:57.992 "nvme_io_md": false, 00:20:57.992 "write_zeroes": true, 00:20:57.992 "zcopy": false, 00:20:57.992 "get_zone_info": false, 00:20:57.992 "zone_management": false, 00:20:57.992 "zone_append": false, 00:20:57.993 "compare": false, 00:20:57.993 "compare_and_write": false, 00:20:57.993 "abort": false, 00:20:57.993 "seek_hole": false, 00:20:57.993 "seek_data": false, 00:20:57.993 "copy": false, 00:20:57.993 "nvme_iov_md": false 00:20:57.993 }, 00:20:57.993 "driver_specific": { 00:20:57.993 "ftl": { 00:20:57.993 "base_bdev": "bef3d361-78b1-4d8a-b7ff-753a36852ca1", 
00:20:57.993 "cache": "nvc0n1p0" 00:20:57.993 } 00:20:57.993 } 00:20:57.993 } 00:20:57.993 ]' 00:20:57.993 08:33:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:57.993 08:33:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:57.993 08:33:45 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:57.993 [2024-11-20 08:33:45.531696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.993 [2024-11-20 08:33:45.531753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:57.993 [2024-11-20 08:33:45.531772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.993 [2024-11-20 08:33:45.531786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.993 [2024-11-20 08:33:45.531852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:57.993 [2024-11-20 08:33:45.535903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.993 [2024-11-20 08:33:45.535935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:57.993 [2024-11-20 08:33:45.535956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:20:57.993 [2024-11-20 08:33:45.535966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.993 [2024-11-20 08:33:45.536984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.993 [2024-11-20 08:33:45.537012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:57.993 [2024-11-20 08:33:45.537026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:20:57.993 [2024-11-20 08:33:45.537036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.993 [2024-11-20 08:33:45.539909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.993 [2024-11-20 08:33:45.539934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:57.993 [2024-11-20 08:33:45.539948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:20:57.993 [2024-11-20 08:33:45.539959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.993 [2024-11-20 08:33:45.545632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.993 [2024-11-20 08:33:45.545665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:57.993 [2024-11-20 08:33:45.545680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:20:57.993 [2024-11-20 08:33:45.545690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.581647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.581786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.253 [2024-11-20 08:33:45.581815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.887 ms 00:20:58.253 [2024-11-20 08:33:45.581825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.604566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.604702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.253 [2024-11-20 08:33:45.604733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.667 ms 00:20:58.253 [2024-11-20 08:33:45.604743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.605092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.605108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.253 [2024-11-20 08:33:45.605122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:58.253 [2024-11-20 08:33:45.605132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.642246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.642282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.253 [2024-11-20 08:33:45.642298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.112 ms 00:20:58.253 [2024-11-20 08:33:45.642308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.678786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.678823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.253 [2024-11-20 08:33:45.678842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.431 ms 00:20:58.253 [2024-11-20 08:33:45.678852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.714671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.714711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.253 [2024-11-20 08:33:45.714727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.766 ms 00:20:58.253 [2024-11-20 08:33:45.714736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.751101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.253 [2024-11-20 08:33:45.751133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.253 [2024-11-20 08:33:45.751149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.202 ms 00:20:58.253 [2024-11-20 08:33:45.751159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.253 [2024-11-20 08:33:45.751283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.253 [2024-11-20 08:33:45.751301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.253 [2024-11-20 08:33:45.751392] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8-100: 0 / 261120 wr_cnt: 0 state: free (93 bands, all identical)
[2024-11-20 08:33:45.752592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 08:33:45.752607] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b
[2024-11-20 08:33:45.752618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 08:33:45.752630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 08:33:45.752639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-20 08:33:45.752655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-20 08:33:45.752664] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-20 08:33:45.752677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-20 08:33:45.752686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-20 08:33:45.752697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-20 08:33:45.752706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
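The WAF figure above is the write amplification factor, conventionally media writes divided by user writes; with 960 internal (metadata) writes and 0 user writes the quotient is undefined, so the dump prints inf. A minimal sketch of that arithmetic, assuming only POSIX awk:

  awk -v total=960 -v user=0 'BEGIN {
      # WAF = media writes / user writes; guard the user == 0 case
      if (user == 0) print "WAF: inf"
      else printf "WAF: %.2f\n", total / user
  }'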
[2024-11-20 08:33:45.752719] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.440 ms, status 0
[2024-11-20 08:33:45.773215] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 20.432 ms, status 0
[2024-11-20 08:33:45.773943] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.461 ms, status 0
[2024-11-20 08:33:45.842679] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-11-20 08:33:45.842914] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-11-20 08:33:45.843066] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-11-20 08:33:45.843170] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-11-20 08:33:45.976344] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-11-20 08:33:46.075795] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076021] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076205] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076409] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076557] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076673] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-11-20 08:33:46.076791] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-11-20 08:33:46.077100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.276 ms, result 0
true
08:33:46 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75148
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' -z 75148 ']'
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@961 -- # kill -0 75148
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@962 -- # uname
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']'
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75148
killing process with pid 75148
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@963 -- # process_name=reactor_0
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']'
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75148'
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@976 -- # kill 75148
08:33:46 ftl.ftl_trim -- common/autotest_common.sh@981 -- # wait 75148
08:33:50 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
65536+0 records in
65536+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 1.03108 s, 260 MB/s
08:33:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 08:33:52.077055] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
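The dd step sizes the test pattern exactly: 65536 blocks of 4 KiB is 65536 * 4096 = 268435456 bytes, i.e. the 256 MiB that spdk_dd then writes to ftl0 from test/ftl/random_pattern. A minimal sketch of the same preparation, assuming the redirect target (not shown in the trace above) is that pattern file:

  # verify the byte math: 65536 * 4096 = 268435456 bytes = 256 MiB
  echo $(( 65536 * 4096 ))

  # regenerate a 256 MiB random pattern (output path assumed, per the spdk_dd --if above)
  dd if=/dev/urandom bs=4K count=65536 > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern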
[2024-11-20 08:33:52.077169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75366 ]
[2024-11-20 08:33:52.252498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 08:33:52.357071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 08:33:52.668271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 08:33:52.668560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 08:33:52.828938] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.004 ms, status 0
[2024-11-20 08:33:52.832202] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 3.166 ms, status 0
[2024-11-20 08:33:52.832354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 08:33:52.833391] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 08:33:52.833417] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 1.072 ms, status 0
[2024-11-20 08:33:52.834906] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 08:33:52.854744] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 19.871 ms, status 0
[2024-11-20 08:33:52.854942] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.022 ms, status 0
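Outside this harness, an FTL bdev like ftl0 is typically assembled over the same two devices named above (a base bdev plus the nvc0n1p0 write-buffer partition) through SPDK's RPC interface. A rough sketch only, not the harness's exact invocation, with the base bdev name assumed:

  # assumes a running SPDK application and existing bdevs; names are illustrative
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d base0 -c nvc0n1p0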
[2024-11-20 08:33:52.861725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 6.699 ms, status 0
[2024-11-20 08:33:52.862051] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.077 ms, status 0
[2024-11-20 08:33:52.862119] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.007 ms, status 0
[2024-11-20 08:33:52.862188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-20 08:33:52.866880] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.706 ms, status 0
[2024-11-20 08:33:52.867028] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.009 ms, status 0
[2024-11-20 08:33:52.867085] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 08:33:52.867110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-20 08:33:52.867144] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-20 08:33:52.867161] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-20 08:33:52.867263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-20 08:33:52.867277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-20 08:33:52.867290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-20 08:33:52.867303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 08:33:52.867319] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 08:33:52.867330] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-20 08:33:52.867340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 08:33:52.867349] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 08:33:52.867359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 08:33:52.867370] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.287 ms, status 0
[2024-11-20 08:33:52.867482] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.063 ms, status 0
[2024-11-20 08:33:52.867621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-20 08:33:52.867633] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.867663] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
[2024-11-20 08:33:52.867691] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
[2024-11-20 08:33:52.867720] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
[2024-11-20 08:33:52.867758] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.867785] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.867813] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
[2024-11-20 08:33:52.867839] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
[2024-11-20 08:33:52.867866] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
[2024-11-20 08:33:52.867893] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
[2024-11-20 08:33:52.867920] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
[2024-11-20 08:33:52.867947] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
[2024-11-20 08:33:52.867973] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.868000] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.868037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-20 08:33:52.868048] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-20 08:33:52.868081] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
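The l2p region size above follows directly from the geometry: 23592960 L2P entries at 4 bytes per address is 94371840 bytes, exactly the 90.00 MiB shown for Region l2p. A quick check, assuming only shell arithmetic:

  echo $(( 23592960 * 4 ))            # 94371840 bytes
  echo $(( 23592960 * 4 / 1048576 ))  # 90 MiB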
[2024-11-20 08:33:52.868109] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-20 08:33:52.868137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-20 08:33:52.868149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-20 08:33:52.868161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
[2024-11-20 08:33:52.868171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
[2024-11-20 08:33:52.868181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
[2024-11-20 08:33:52.868192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
[2024-11-20 08:33:52.868202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
[2024-11-20 08:33:52.868212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
[2024-11-20 08:33:52.868223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
[2024-11-20 08:33:52.868232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
[2024-11-20 08:33:52.868242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
[2024-11-20 08:33:52.868252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
[2024-11-20 08:33:52.868262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
[2024-11-20 08:33:52.868272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
[2024-11-20 08:33:52.868282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
[2024-11-20 08:33:52.868292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-20 08:33:52.868302] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-20 08:33:52.868313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-20 08:33:52.868324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-20 08:33:52.868334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-20 08:33:52.868344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-20 08:33:52.868355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 08:33:52.868365] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.801 ms, status 0
[2024-11-20 08:33:52.906933] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 38.545 ms, status 0
[2024-11-20 08:33:52.907144] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.061 ms, status 0
[2024-11-20 08:33:52.978315] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 71.225 ms, status 0
[2024-11-20 08:33:52.978471] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.003 ms, status 0
[2024-11-20 08:33:52.978941] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.417 ms, status 0
[2024-11-20 08:33:52.979125] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.105 ms, status 0
[2024-11-20 08:33:52.997885] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 18.735 ms, status 0
[2024-11-20 08:33:53.016611] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
[2024-11-20 08:33:53.016651] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 08:33:53.016665] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 18.624 ms, status 0
[2024-11-20 08:33:53.045969] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 29.224 ms, status 0
[2024-11-20 08:33:53.063550] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 17.433 ms, status 0
[2024-11-20 08:33:53.081156] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 17.345 ms, status 0
[2024-11-20 08:33:53.081897] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.579 ms, status 0
[2024-11-20 08:33:53.168312] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 86.491 ms, status 0
[2024-11-20 08:33:53.179685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-20 08:33:53.196003] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 27.525 ms, status 0
[2024-11-20 08:33:53.196402] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.005 ms, status 0
[2024-11-20 08:33:53.196494] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.031 ms, status 0
[2024-11-20 08:33:53.196552] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.008 ms, status 0
[2024-11-20 08:33:53.196644] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 08:33:53.196660] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.017 ms, status 0
[2024-11-20 08:33:53.233455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 36.801 ms, status 0
[2024-11-20 08:33:53.233642] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.041 ms, status 0
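Summed, the per-step durations from 'Check configuration' through 'Finalize initialization' come to just under 400 ms, consistent with the 406.013 ms total reported for 'FTL startup' below; the remainder is inter-step overhead. A rough way to recheck from a saved slice of this log, assuming the one-step-per-line format used above and a hypothetical file name:

  grep -o 'duration [0-9.]* ms' ftl_startup.log \
    | awk '{ sum += $2 } END { printf "total %.3f ms\n", sum }'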
[2024-11-20 08:33:53.234573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 08:33:53.239081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.013 ms, result 0
[2024-11-20 08:33:53.239942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 08:33:53.258566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20T08:33:55.355Z] Copying: 23/256 [MB] (23 MBps)
[2024-11-20T08:33:56.293Z] Copying: 45/256 [MB] (22 MBps)
[2024-11-20T08:33:57.674Z] Copying: 69/256 [MB] (23 MBps)
[2024-11-20T08:33:58.613Z] Copying: 92/256 [MB] (23 MBps)
[2024-11-20T08:33:59.550Z] Copying: 115/256 [MB] (23 MBps)
[2024-11-20T08:34:00.488Z] Copying: 139/256 [MB] (23 MBps)
[2024-11-20T08:34:01.425Z] Copying: 162/256 [MB] (23 MBps)
[2024-11-20T08:34:02.363Z] Copying: 185/256 [MB] (23 MBps)
[2024-11-20T08:34:03.302Z] Copying: 209/256 [MB] (23 MBps)
[2024-11-20T08:34:04.685Z] Copying: 232/256 [MB] (22 MBps)
[2024-11-20T08:34:04.685Z] Copying: 254/256 [MB] (22 MBps)
[2024-11-20T08:34:04.685Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-11-20 08:34:04.290832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 08:34:04.305469] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.003 ms, status 0
[2024-11-20 08:34:04.305572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 08:34:04.309714] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 4.133 ms, status 0
[2024-11-20 08:34:04.312049] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 2.135 ms, status 0
[2024-11-20 08:34:04.319372] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 7.259 ms, status 0
[2024-11-20 08:34:04.325137] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 5.658 ms, status 0
[2024-11-20 08:34:04.361297] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 36.105 ms, status 0
[2024-11-20 08:34:04.382902] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 21.526 ms, status 0
[2024-11-20 08:34:04.383250] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.071 ms, status 0
[2024-11-20 08:34:04.420285] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 37.042 ms, status 0
[2024-11-20 08:34:04.455730] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 35.389 ms, status 0
[2024-11-20 08:34:04.491085] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 35.166 ms, status 0
[2024-11-20 08:34:04.526830] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 35.653 ms, status 0
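The copy rate is internally consistent: 256 MB at the reported 23 MBps average is roughly 11 s, matching the wall clock between the startup finish (08:33:53.24) and the final IO channel teardown (08:34:04.29). Checked with shell arithmetic:

  echo $(( 256 / 23 ))   # ~11 s expected at 23 MBps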
[2024-11-20 08:34:04.526942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 08:34:04.526965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-98: 0 / 261120 wr_cnt: 0 state: free (98 bands, all identical)
[2024-11-20 08:34:04.528010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:21:17.125 [2024-11-20 08:34:04.528021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:17.125 [2024-11-20 08:34:04.528038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:17.125 [2024-11-20 08:34:04.528047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:17.125 [2024-11-20 08:34:04.528058] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:17.126 [2024-11-20 08:34:04.528067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:17.126 [2024-11-20 08:34:04.528076] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:17.126 [2024-11-20 08:34:04.528086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:17.126 [2024-11-20 08:34:04.528095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:17.126 [2024-11-20 08:34:04.528105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:17.126 [2024-11-20 08:34:04.528114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:17.126 [2024-11-20 08:34:04.528123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:17.126 [2024-11-20 08:34:04.528132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:17.126 [2024-11-20 08:34:04.528141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.126 [2024-11-20 08:34:04.528150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:17.126 [2024-11-20 08:34:04.528164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:21:17.126 [2024-11-20 08:34:04.528174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.126 [2024-11-20 08:34:04.548605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.126 [2024-11-20 08:34:04.548639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:17.126 [2024-11-20 08:34:04.548652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.444 ms 00:21:17.126 [2024-11-20 08:34:04.548663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.126 [2024-11-20 08:34:04.549257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.126 [2024-11-20 08:34:04.549281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:17.126 [2024-11-20 08:34:04.549292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:21:17.126 [2024-11-20 08:34:04.549302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.126 [2024-11-20 08:34:04.606623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.126 [2024-11-20 08:34:04.606671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.126 [2024-11-20 08:34:04.606685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.126 [2024-11-20 08:34:04.606696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.126 [2024-11-20 08:34:04.606787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.126 [2024-11-20 08:34:04.606803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.126 [2024-11-20 08:34:04.606813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
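
The WAF figure in the dump above falls straight out of the two counters beside it: write amplification is media writes divided by host writes, and with total writes: 960 against user writes: 0 the ratio is 960/0, which the dump prints as inf. A minimal sketch of that computation, assuming the dump derives WAF from exactly these two counters (the helper name is ours, not SPDK's):

    import math

    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification as shown by ftl_dev_dump_stats:
        media (NAND) writes per host write; infinite when the host
        wrote nothing, as in this run (960 / 0 -> inf)."""
        if user_writes == 0:
            return math.inf
        return total_writes / user_writes

    assert waf(960, 0) == math.inf
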
00:21:17.126 [2024-11-20 08:34:04.528141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.126 [2024-11-20 08:34:04.528150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:17.126 [2024-11-20 08:34:04.528164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms
00:21:17.126 [2024-11-20 08:34:04.528174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.548605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.126 [2024-11-20 08:34:04.548639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:17.126 [2024-11-20 08:34:04.548652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.444 ms
00:21:17.126 [2024-11-20 08:34:04.548663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.549257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.126 [2024-11-20 08:34:04.549281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:17.126 [2024-11-20 08:34:04.549292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms
00:21:17.126 [2024-11-20 08:34:04.549302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.606623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.126 [2024-11-20 08:34:04.606671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:17.126 [2024-11-20 08:34:04.606685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.126 [2024-11-20 08:34:04.606696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.606787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.126 [2024-11-20 08:34:04.606803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:17.126 [2024-11-20 08:34:04.606813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.126 [2024-11-20 08:34:04.606823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.606874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.126 [2024-11-20 08:34:04.606887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:17.126 [2024-11-20 08:34:04.606898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.126 [2024-11-20 08:34:04.606908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.126 [2024-11-20 08:34:04.606927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.126 [2024-11-20 08:34:04.606938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:17.126 [2024-11-20 08:34:04.606952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.126 [2024-11-20 08:34:04.606962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.732085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.732132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:17.386 [2024-11-20 08:34:04.732146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.732157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.833902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.833955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:17.386 [2024-11-20 08:34:04.833975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.833985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:17.386 [2024-11-20 08:34:04.834118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:17.386 [2024-11-20 08:34:04.834185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:17.386 [2024-11-20 08:34:04.834346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:17.386 [2024-11-20 08:34:04.834415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:17.386 [2024-11-20 08:34:04.834489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:17.386 [2024-11-20 08:34:04.834550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:17.386 [2024-11-20 08:34:04.834560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:17.386 [2024-11-20 08:34:04.834574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.386 [2024-11-20 08:34:04.834702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.086 ms, result 0
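
Every management step in the trace above is logged as a fixed four-record group (Action/Rollback, name, duration, status), and finish_msg then reports the end-to-end total, 530.086 ms for 'FTL shutdown' here. When a run is slow it helps to fold those groups back into per-step timings; the sketch below does that with regexes written against the exact record format above (function and pattern names are ours, not SPDK's):

    import re

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        """Pair each 'name:' trace_step record with the 'duration:'
        record that follows it, yielding (step name, milliseconds)."""
        name = None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                name = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and name is not None:
                yield name, float(m.group(1))
                name = None

    # e.g. slowest steps first:
    # sorted(step_durations(open("console.log")), key=lambda t: -t[1])
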
00:21:18.767
00:21:18.767
00:21:18.767 08:34:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75515
00:21:18.767 08:34:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:21:18.767 08:34:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75515
00:21:18.767 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # '[' -z 75515 ']'
00:21:18.767 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:18.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:18.768 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@843 -- # local max_retries=100
00:21:18.768 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:18.768 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@847 -- # xtrace_disable
00:21:18.768 08:34:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:18.768 [2024-11-20 08:34:06.094882] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
00:21:18.768 [2024-11-20 08:34:06.095043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ]
00:21:18.768 [2024-11-20 08:34:06.272025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:19.027 [2024-11-20 08:34:06.382392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:19.966 08:34:07 ftl.ftl_trim -- common/autotest_common.sh@867 -- # (( i == 0 ))
00:21:19.967 08:34:07 ftl.ftl_trim -- common/autotest_common.sh@871 -- # return 0
00:21:19.967 08:34:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:21:19.967 [2024-11-20 08:34:07.455505] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:20.228 [2024-11-20 08:34:07.455749] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:20.228 [2024-11-20 08:34:07.638824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.638876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:20.228 [2024-11-20 08:34:07.638897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:21:20.228 [2024-11-20 08:34:07.638908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.642743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.642893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:20.228 [2024-11-20 08:34:07.642919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms
00:21:20.228 [2024-11-20 08:34:07.642930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.643090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:20.228 [2024-11-20 08:34:07.644116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:20.228 [2024-11-20 08:34:07.644145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.644156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:20.228 [2024-11-20 08:34:07.644169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms
00:21:20.228 [2024-11-20 08:34:07.644179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.645796] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:20.228 [2024-11-20 08:34:07.665138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.665291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:21:20.228 [2024-11-20 08:34:07.665312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.377 ms
00:21:20.228 [2024-11-20 08:34:07.665328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.665425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.665445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:21:20.228 [2024-11-20 08:34:07.665457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:21:20.228 [2024-11-20 08:34:07.665472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.672304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.672468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:20.228 [2024-11-20 08:34:07.672487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.785 ms
00:21:20.228 [2024-11-20 08:34:07.672503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.672652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.672671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:20.228 [2024-11-20 08:34:07.672683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:21:20.228 [2024-11-20 08:34:07.672698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.672738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.672754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:21:20.228 [2024-11-20 08:34:07.672765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:21:20.228 [2024-11-20 08:34:07.672781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.672806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:21:20.228 [2024-11-20 08:34:07.677700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.677731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:20.228 [2024-11-20 08:34:07.677748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.902 ms
00:21:20.228 [2024-11-20 08:34:07.677775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.677850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.677862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:21:20.228 [2024-11-20 08:34:07.677879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:21:20.228 [2024-11-20 08:34:07.677895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.677922] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:20.228 [2024-11-20 08:34:07.677945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:21:20.228 [2024-11-20 08:34:07.678014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:21:20.228 [2024-11-20 08:34:07.678035] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:21:20.228 [2024-11-20 08:34:07.678128] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:20.228 [2024-11-20 08:34:07.678151] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:20.228 [2024-11-20 08:34:07.678171] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:20.228 [2024-11-20 08:34:07.678190] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678207] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:21:20.228 [2024-11-20 08:34:07.678234] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:21:20.228 [2024-11-20 08:34:07.678245] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:21:20.228 [2024-11-20 08:34:07.678275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:21:20.228 [2024-11-20 08:34:07.678286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.678301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:21:20.228 [2024-11-20 08:34:07.678312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms
00:21:20.228 [2024-11-20 08:34:07.678327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.678409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.228 [2024-11-20 08:34:07.678426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:21:20.228 [2024-11-20 08:34:07.678436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:21:20.228 [2024-11-20 08:34:07.678451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.228 [2024-11-20 08:34:07.678539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:20.228 [2024-11-20 08:34:07.678556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:20.228 [2024-11-20 08:34:07.678567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:20.228 [2024-11-20 08:34:07.678608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:20.228 [2024-11-20 08:34:07.678649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:20.228 [2024-11-20 08:34:07.678673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:20.228 [2024-11-20 08:34:07.678688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:21:20.228 [2024-11-20 08:34:07.678699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:20.228 [2024-11-20 08:34:07.678713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:20.228 [2024-11-20 08:34:07.678723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:21:20.228 [2024-11-20 08:34:07.678737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:20.228 [2024-11-20 08:34:07.678761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:20.228 [2024-11-20 08:34:07.678806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:20.228 [2024-11-20 08:34:07.678851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:20.228 [2024-11-20 08:34:07.678885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:20.228 [2024-11-20 08:34:07.678909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:20.228 [2024-11-20 08:34:07.678923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:21:20.228 [2024-11-20 08:34:07.678933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:20.229 [2024-11-20 08:34:07.678949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:20.229 [2024-11-20 08:34:07.678959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:21:20.229 [2024-11-20 08:34:07.678973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:20.229 [2024-11-20 08:34:07.678983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:20.229 [2024-11-20 08:34:07.679009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:21:20.229 [2024-11-20 08:34:07.679018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:20.229 [2024-11-20 08:34:07.679033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:20.229 [2024-11-20 08:34:07.679042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:21:20.229 [2024-11-20 08:34:07.679061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.229 [2024-11-20 08:34:07.679071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:20.229 [2024-11-20 08:34:07.679085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:21:20.229 [2024-11-20 08:34:07.679095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.229 [2024-11-20 08:34:07.679111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:20.229 [2024-11-20 08:34:07.679122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:20.229 [2024-11-20 08:34:07.679150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:20.229 [2024-11-20 08:34:07.679160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:20.229 [2024-11-20 08:34:07.679175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:20.229 [2024-11-20 08:34:07.679186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:21:20.229 [2024-11-20 08:34:07.679201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:21:20.229 [2024-11-20 08:34:07.679210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:20.229 [2024-11-20 08:34:07.679224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:21:20.229 [2024-11-20 08:34:07.679234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:21:20.229 [2024-11-20 08:34:07.679249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:20.229 [2024-11-20 08:34:07.679262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:21:20.229 [2024-11-20 08:34:07.679295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:21:20.229 [2024-11-20 08:34:07.679311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:21:20.229 [2024-11-20 08:34:07.679322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:21:20.229 [2024-11-20 08:34:07.679337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:21:20.229 [2024-11-20 08:34:07.679349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:21:20.229 [2024-11-20 08:34:07.679364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:21:20.229 [2024-11-20 08:34:07.679375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:21:20.229 [2024-11-20 08:34:07.679390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:21:20.229 [2024-11-20 08:34:07.679401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:21:20.229 [2024-11-20 08:34:07.679468] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:20.229 [2024-11-20 08:34:07.679480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:20.229 [2024-11-20 08:34:07.679512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:20.229 [2024-11-20 08:34:07.679527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:20.229 [2024-11-20 08:34:07.679538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:20.229 [2024-11-20 08:34:07.679554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.679565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:21:20.229 [2024-11-20 08:34:07.679578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms
00:21:20.229 [2024-11-20 08:34:07.679588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
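
The layout dump above is internally consistent and pins down the FTL block size: the superblock region list places the l2p region (type 0x2) at blk_sz 0x5a00 = 23040 blocks, the region dump sizes the same region at 90.00 MiB, and 90 MiB over 23040 blocks is 4 KiB per block; equivalently, 23592960 L2P entries at the reported address size of 4 bytes is again 94371840 bytes = 90 MiB. A quick cross-check of that arithmetic:

    # Region type 0x2 (l2p) spans 0x5a00 blocks and is reported as 90.00 MiB,
    # so one FTL block works out to 4 KiB.
    l2p_blocks = 0x5A00                   # 23040, from the SB metadata dump
    l2p_bytes = 90 * 1024 * 1024          # "blocks: 90.00 MiB" in the region dump
    assert l2p_bytes // l2p_blocks == 4096

    # The same 90 MiB falls out of the L2P geometry printed above.
    assert 23592960 * 4 == l2p_bytes      # "L2P entries" x "L2P address size"
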
00:21:20.229 [2024-11-20 08:34:07.720676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.720714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:20.229 [2024-11-20 08:34:07.720734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.088 ms
00:21:20.229 [2024-11-20 08:34:07.720745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.229 [2024-11-20 08:34:07.720869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.720882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:21:20.229 [2024-11-20 08:34:07.720896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:21:20.229 [2024-11-20 08:34:07.720906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.229 [2024-11-20 08:34:07.770461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.770498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:20.229 [2024-11-20 08:34:07.770541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.604 ms
00:21:20.229 [2024-11-20 08:34:07.770552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.229 [2024-11-20 08:34:07.770651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.770664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:20.229 [2024-11-20 08:34:07.770680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:20.229 [2024-11-20 08:34:07.770690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.229 [2024-11-20 08:34:07.771155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.771171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:20.229 [2024-11-20 08:34:07.771192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms
00:21:20.229 [2024-11-20 08:34:07.771203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.229 [2024-11-20 08:34:07.771326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.229 [2024-11-20 08:34:07.771340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:20.229 [2024-11-20 08:34:07.771356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:21:20.229 [2024-11-20 08:34:07.771366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.489 [2024-11-20 08:34:07.793717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.489 [2024-11-20 08:34:07.793754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:20.489 [2024-11-20 08:34:07.793773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.357 ms
00:21:20.489 [2024-11-20 08:34:07.793784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.489 [2024-11-20 08:34:07.812748] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:21:20.489 [2024-11-20 08:34:07.812785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:20.489 [2024-11-20 08:34:07.812821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.489 [2024-11-20 08:34:07.812832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:21:20.489 [2024-11-20 08:34:07.812848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.953 ms
00:21:20.489 [2024-11-20 08:34:07.812858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.489 [2024-11-20 08:34:07.842841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.489 [2024-11-20 08:34:07.842892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:21:20.489 [2024-11-20 08:34:07.842914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.945 ms
00:21:20.489 [2024-11-20 08:34:07.842925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.489 [2024-11-20 08:34:07.860955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.489 [2024-11-20 08:34:07.861107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:21:20.489 [2024-11-20 08:34:07.861141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.946 ms
00:21:20.489 [2024-11-20 08:34:07.861152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.489 [2024-11-20 08:34:07.879574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.489 [2024-11-20 08:34:07.879719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:21:20.489 [2024-11-20 08:34:07.879747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.309 ms
00:21:20.490 [2024-11-20 08:34:07.879758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:07.880594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:07.880618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:21:20.490 [2024-11-20 08:34:07.880635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms
00:21:20.490 [2024-11-20 08:34:07.880646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:07.978161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:07.978224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:21:20.490 [2024-11-20 08:34:07.978246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.637 ms
00:21:20.490 [2024-11-20 08:34:07.978259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:07.989160] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:20.490 [2024-11-20 08:34:08.005298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.005364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:21:20.490 [2024-11-20 08:34:08.005386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.988 ms
00:21:20.490 [2024-11-20 08:34:08.005402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.005522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.005542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:21:20.490 [2024-11-20 08:34:08.005554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:21:20.490 [2024-11-20 08:34:08.005570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.005621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.005638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:21:20.490 [2024-11-20 08:34:08.005650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:21:20.490 [2024-11-20 08:34:08.005665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.005696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.005712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:20.490 [2024-11-20 08:34:08.005723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:21:20.490 [2024-11-20 08:34:08.005740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.005780] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:20.490 [2024-11-20 08:34:08.005802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.005813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:20.490 [2024-11-20 08:34:08.005834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:21:20.490 [2024-11-20 08:34:08.005844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.042252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.042390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:20.490 [2024-11-20 08:34:08.042437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.428 ms
00:21:20.490 [2024-11-20 08:34:08.042448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.042565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.490 [2024-11-20 08:34:08.042579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:20.490 [2024-11-20 08:34:08.042595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:21:20.490 [2024-11-20 08:34:08.042611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.490 [2024-11-20 08:34:08.043516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:20.490 [2024-11-20 08:34:08.047733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.042 ms, result 0
00:21:20.750 [2024-11-20 08:34:08.048869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:20.750 Some configs were skipped because the RPC state that can call them passed over.
00:21:20.750 08:34:08 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:21:20.750 [2024-11-20 08:34:08.308459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.750 [2024-11-20 08:34:08.308668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:20.750 [2024-11-20 08:34:08.308758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms
00:21:20.750 [2024-11-20 08:34:08.308810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.750 [2024-11-20 08:34:08.308904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.004 ms, result 0
00:21:21.010 true
00:21:21.010 08:34:08 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:21:21.010 [2024-11-20 08:34:08.516151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.010 [2024-11-20 08:34:08.516354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:21.010 [2024-11-20 08:34:08.516460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms
00:21:21.010 [2024-11-20 08:34:08.516502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.010 [2024-11-20 08:34:08.516584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.839 ms, result 0
00:21:21.010 true
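
Both trims above go through scripts/rpc.py, which is a JSON-RPC 2.0 client talking to the target over the /var/tmp/spdk.sock Unix socket announced during waitforlisten; note the second call's --lba 23591936 is 23592960 - 1024, i.e. the last 1024 blocks of the L2P space from the layout dump. A hedged sketch of the equivalent raw request (the params keys are inferred from the CLI flags shown in the log, not taken from SPDK documentation):

    import json
    import socket

    def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to a running SPDK target and
        return the decoded response (illustrative; no error handling)."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full response")
                buf += chunk
                try:
                    return json.loads(buf)
                except json.JSONDecodeError:
                    continue  # keep reading until the JSON document is complete

    # The two unmaps from this run, assuming rpc.py's -b maps to "name":
    # spdk_rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": 0, "num_blocks": 1024})
    # spdk_rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": 23591936, "num_blocks": 1024})
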
00:21:21.010 08:34:08 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75515
00:21:21.010 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' -z 75515 ']'
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@961 -- # kill -0 75515
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@962 -- # uname
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']'
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75515
00:21:21.011 killing process with pid 75515 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@963 -- # process_name=reactor_0
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']'
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75515'
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@976 -- # kill 75515
00:21:21.011 08:34:08 ftl.ftl_trim -- common/autotest_common.sh@981 -- # wait 75515
00:21:22.394 [2024-11-20 08:34:09.673490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.673555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:22.394 [2024-11-20 08:34:09.673571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:22.394 [2024-11-20 08:34:09.673584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.673607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:22.394 [2024-11-20 08:34:09.677763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.677798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:22.394 [2024-11-20 08:34:09.677815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms
00:21:22.394 [2024-11-20 08:34:09.677826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.678094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.678108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:22.394 [2024-11-20 08:34:09.678122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:21:22.394 [2024-11-20 08:34:09.678132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.681437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.681470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:22.394 [2024-11-20 08:34:09.681489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms
00:21:22.394 [2024-11-20 08:34:09.681499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.687019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.687054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:22.394 [2024-11-20 08:34:09.687068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.490 ms
00:21:22.394 [2024-11-20 08:34:09.687094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.701673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.701707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:22.394 [2024-11-20 08:34:09.701724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.544 ms
00:21:22.394 [2024-11-20 08:34:09.701759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.712159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.712211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:22.394 [2024-11-20 08:34:09.712233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.347 ms
00:21:22.394 [2024-11-20 08:34:09.712243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.712384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.712397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:22.394 [2024-11-20 08:34:09.712411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:21:22.394 [2024-11-20 08:34:09.712421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.727976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.728023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:21:22.394 [2024-11-20 08:34:09.728039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.556 ms
00:21:22.394 [2024-11-20 08:34:09.728048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.743369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.743404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:22.394 [2024-11-20 08:34:09.743428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.272 ms
00:21:22.394 [2024-11-20 08:34:09.743438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.758103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.758136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:22.394 [2024-11-20 08:34:09.758181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.632 ms
00:21:22.394 [2024-11-20 08:34:09.758191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.772690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.394 [2024-11-20 08:34:09.772845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:22.394 [2024-11-20 08:34:09.772875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.439 ms
00:21:22.394 [2024-11-20 08:34:09.772885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.394 [2024-11-20 08:34:09.772941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:22.394 [2024-11-20 08:34:09.772958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:21:22.394 [2024-11-20 08:34:09.772976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:21:22.394 [2024-11-20 08:34:09.773003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:21:22.394 [2024-11-20 08:34:09.773020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.773982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:21:22.395 [2024-11-20 08:34:09.774097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.395 [2024-11-20 08:34:09.774217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.396 [2024-11-20 08:34:09.774335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.396 [2024-11-20 08:34:09.774360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:22.396 [2024-11-20 08:34:09.774384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.396 [2024-11-20 08:34:09.774406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:22.396 [2024-11-20 08:34:09.774416] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.396 [2024-11-20 08:34:09.774431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.396 [2024-11-20 08:34:09.774441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.396 [2024-11-20 08:34:09.774456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.396 [2024-11-20 08:34:09.774466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.396 [2024-11-20 08:34:09.774480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.396 [2024-11-20 08:34:09.774489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.396 [2024-11-20 08:34:09.774504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:22.396 [2024-11-20 08:34:09.774515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.396 [2024-11-20 08:34:09.774531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:21:22.396 [2024-11-20 08:34:09.774541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.794381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-20 08:34:09.794524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.396 [2024-11-20 08:34:09.794660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.835 ms 00:21:22.396 [2024-11-20 08:34:09.794699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.795303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-20 08:34:09.795400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.396 [2024-11-20 08:34:09.795475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:21:22.396 [2024-11-20 08:34:09.795517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.863598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.396 [2024-11-20 08:34:09.863753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.396 [2024-11-20 08:34:09.863833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.396 [2024-11-20 08:34:09.863870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.864002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.396 [2024-11-20 08:34:09.864042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.396 [2024-11-20 08:34:09.864079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.396 [2024-11-20 08:34:09.864163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.864252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.396 [2024-11-20 08:34:09.864290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.396 [2024-11-20 08:34:09.864331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.396 [2024-11-20 08:34:09.864363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-20 08:34:09.864508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.396 [2024-11-20 08:34:09.864543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.396 [2024-11-20 08:34:09.864579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.396 [2024-11-20 08:34:09.864610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:09.986271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:09.986521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.656 [2024-11-20 08:34:09.986628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:09.986673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 
08:34:10.086983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.087221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.656 [2024-11-20 08:34:10.087303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.087346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.087495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.087535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.656 [2024-11-20 08:34:10.087577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.087609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.087733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.087772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.656 [2024-11-20 08:34:10.087809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.087841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.088010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.088051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.656 [2024-11-20 08:34:10.088088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.088119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.088246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.088286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.656 [2024-11-20 08:34:10.088376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.088413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.088484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.088558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.656 [2024-11-20 08:34:10.088669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.088705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.088815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.656 [2024-11-20 08:34:10.088937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.656 [2024-11-20 08:34:10.089050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.656 [2024-11-20 08:34:10.089086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.656 [2024-11-20 08:34:10.089257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.411 ms, result 0 00:21:23.596 08:34:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:23.596 08:34:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.856 [2024-11-20 08:34:11.175022] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:21:23.856 [2024-11-20 08:34:11.175158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75579 ] 00:21:23.856 [2024-11-20 08:34:11.345659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.116 [2024-11-20 08:34:11.463710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.375 [2024-11-20 08:34:11.828504] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.375 [2024-11-20 08:34:11.828569] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.637 [2024-11-20 08:34:11.990373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:11.990594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.637 [2024-11-20 08:34:11.990618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.637 [2024-11-20 08:34:11.990630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:11.993681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:11.993818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.637 [2024-11-20 08:34:11.993839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:21:24.637 [2024-11-20 08:34:11.993850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:11.994018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:24.637 [2024-11-20 08:34:11.994962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.637 [2024-11-20 08:34:11.994999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:11.995011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.637 [2024-11-20 08:34:11.995022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:21:24.637 [2024-11-20 08:34:11.995033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:11.996508] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.637 [2024-11-20 08:34:12.015163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.015205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.637 [2024-11-20 08:34:12.015220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.685 ms 00:21:24.637 [2024-11-20 08:34:12.015230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.015325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.015339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.637 [2024-11-20 08:34:12.015351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:21:24.637 [2024-11-20 08:34:12.015361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.022111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.022145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.637 [2024-11-20 08:34:12.022157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.720 ms 00:21:24.637 [2024-11-20 08:34:12.022183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.022278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.022293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.637 [2024-11-20 08:34:12.022305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:24.637 [2024-11-20 08:34:12.022315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.022343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.022357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.637 [2024-11-20 08:34:12.022367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:24.637 [2024-11-20 08:34:12.022377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.022400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:24.637 [2024-11-20 08:34:12.027365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.027496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.637 [2024-11-20 08:34:12.027621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:21:24.637 [2024-11-20 08:34:12.027657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.027761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.027797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.637 [2024-11-20 08:34:12.027882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.637 [2024-11-20 08:34:12.027917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.027966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.637 [2024-11-20 08:34:12.028040] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.637 [2024-11-20 08:34:12.028163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.637 [2024-11-20 08:34:12.028220] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.637 [2024-11-20 08:34:12.028346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.637 [2024-11-20 08:34:12.028471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.637 [2024-11-20 08:34:12.028522] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.637 [2024-11-20 08:34:12.028618] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.637 [2024-11-20 08:34:12.028676] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.637 [2024-11-20 08:34:12.028772] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:24.637 [2024-11-20 08:34:12.028788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.637 [2024-11-20 08:34:12.028798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.637 [2024-11-20 08:34:12.028808] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.637 [2024-11-20 08:34:12.028819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.028829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.637 [2024-11-20 08:34:12.028841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:21:24.637 [2024-11-20 08:34:12.028851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.028936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.637 [2024-11-20 08:34:12.028946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.637 [2024-11-20 08:34:12.028962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:24.637 [2024-11-20 08:34:12.028972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.637 [2024-11-20 08:34:12.029076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.637 [2024-11-20 08:34:12.029090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.637 [2024-11-20 08:34:12.029101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.637 [2024-11-20 08:34:12.029111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.637 [2024-11-20 08:34:12.029124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.637 [2024-11-20 08:34:12.029134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.637 [2024-11-20 08:34:12.029143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:24.637 [2024-11-20 08:34:12.029153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.637 [2024-11-20 08:34:12.029163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:24.637 [2024-11-20 08:34:12.029172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.637 [2024-11-20 08:34:12.029182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.638 [2024-11-20 08:34:12.029191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:24.638 [2024-11-20 08:34:12.029200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.638 [2024-11-20 08:34:12.029219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.638 [2024-11-20 08:34:12.029229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:24.638 [2024-11-20 08:34:12.029238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029247] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.638 [2024-11-20 08:34:12.029257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.638 [2024-11-20 08:34:12.029284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.638 [2024-11-20 08:34:12.029312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.638 [2024-11-20 08:34:12.029340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.638 [2024-11-20 08:34:12.029367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.638 [2024-11-20 08:34:12.029394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.638 [2024-11-20 08:34:12.029411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:24.638 [2024-11-20 08:34:12.029420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:24.638 [2024-11-20 08:34:12.029430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.638 [2024-11-20 08:34:12.029440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.638 [2024-11-20 08:34:12.029449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:24.638 [2024-11-20 08:34:12.029457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.638 [2024-11-20 08:34:12.029475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:24.638 [2024-11-20 08:34:12.029484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029493] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.638 [2024-11-20 08:34:12.029503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.638 [2024-11-20 08:34:12.029513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.638 [2024-11-20 08:34:12.029536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.638 
[2024-11-20 08:34:12.029545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.638 [2024-11-20 08:34:12.029555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.638 [2024-11-20 08:34:12.029564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.638 [2024-11-20 08:34:12.029573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.638 [2024-11-20 08:34:12.029582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.638 [2024-11-20 08:34:12.029594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.638 [2024-11-20 08:34:12.029607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:24.638 [2024-11-20 08:34:12.029629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:24.638 [2024-11-20 08:34:12.029640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:24.638 [2024-11-20 08:34:12.029650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:24.638 [2024-11-20 08:34:12.029661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:24.638 [2024-11-20 08:34:12.029671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:24.638 [2024-11-20 08:34:12.029681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:24.638 [2024-11-20 08:34:12.029692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:24.638 [2024-11-20 08:34:12.029702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:24.638 [2024-11-20 08:34:12.029712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:24.638 [2024-11-20 08:34:12.029763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.638 [2024-11-20 08:34:12.029774] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.638 [2024-11-20 08:34:12.029796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.638 [2024-11-20 08:34:12.029806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.638 [2024-11-20 08:34:12.029817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.638 [2024-11-20 08:34:12.029828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.029838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.638 [2024-11-20 08:34:12.029856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:21:24.638 [2024-11-20 08:34:12.029866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.069876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.070057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.638 [2024-11-20 08:34:12.070193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.021 ms 00:21:24.638 [2024-11-20 08:34:12.070231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.070385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.070455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.638 [2024-11-20 08:34:12.070529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:24.638 [2024-11-20 08:34:12.070558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.145863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.146055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.638 [2024-11-20 08:34:12.146149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.384 ms 00:21:24.638 [2024-11-20 08:34:12.146196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.146329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.146366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.638 [2024-11-20 08:34:12.146398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.638 [2024-11-20 08:34:12.146427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.146968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.147077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.638 [2024-11-20 08:34:12.147149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:21:24.638 [2024-11-20 08:34:12.147192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 
08:34:12.147337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.147379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.638 [2024-11-20 08:34:12.147479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:24.638 [2024-11-20 08:34:12.147515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.168309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.638 [2024-11-20 08:34:12.168438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.638 [2024-11-20 08:34:12.168509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.777 ms 00:21:24.638 [2024-11-20 08:34:12.168544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.638 [2024-11-20 08:34:12.186994] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:24.639 [2024-11-20 08:34:12.187143] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.639 [2024-11-20 08:34:12.187234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.639 [2024-11-20 08:34:12.187267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.639 [2024-11-20 08:34:12.187298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.590 ms 00:21:24.639 [2024-11-20 08:34:12.187327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.217468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.217621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.899 [2024-11-20 08:34:12.217704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.090 ms 00:21:24.899 [2024-11-20 08:34:12.217739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.236164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.236317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.899 [2024-11-20 08:34:12.236433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.304 ms 00:21:24.899 [2024-11-20 08:34:12.236471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.254332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.254459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.899 [2024-11-20 08:34:12.254549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:21:24.899 [2024-11-20 08:34:12.254564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.255294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.255315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.899 [2024-11-20 08:34:12.255326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:21:24.899 [2024-11-20 08:34:12.255336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.341004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.341071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.899 [2024-11-20 08:34:12.341087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.779 ms 00:21:24.899 [2024-11-20 08:34:12.341098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.351923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:24.899 [2024-11-20 08:34:12.368365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.368418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.899 [2024-11-20 08:34:12.368433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.207 ms 00:21:24.899 [2024-11-20 08:34:12.368458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.368593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.368607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.899 [2024-11-20 08:34:12.368618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:24.899 [2024-11-20 08:34:12.368628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.368682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.368694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.899 [2024-11-20 08:34:12.368704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:24.899 [2024-11-20 08:34:12.368714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.368741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.368756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.899 [2024-11-20 08:34:12.368766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:24.899 [2024-11-20 08:34:12.368776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.368810] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.899 [2024-11-20 08:34:12.368822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.368832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.899 [2024-11-20 08:34:12.368842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:24.899 [2024-11-20 08:34:12.368851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.405536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.405709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.899 [2024-11-20 08:34:12.405730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.724 ms 00:21:24.899 [2024-11-20 08:34:12.405741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.405856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.899 [2024-11-20 08:34:12.405870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:24.899 [2024-11-20 08:34:12.405882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:24.899 [2024-11-20 08:34:12.405892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.899 [2024-11-20 08:34:12.406795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.899 [2024-11-20 08:34:12.411084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.810 ms, result 0 00:21:24.899 [2024-11-20 08:34:12.411946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:24.899 [2024-11-20 08:34:12.430559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.281  [2024-11-20T08:34:14.783Z] Copying: 26/256 [MB] (26 MBps) [2024-11-20T08:34:15.723Z] Copying: 49/256 [MB] (22 MBps) [2024-11-20T08:34:16.661Z] Copying: 73/256 [MB] (23 MBps) [2024-11-20T08:34:17.607Z] Copying: 95/256 [MB] (22 MBps) [2024-11-20T08:34:18.613Z] Copying: 118/256 [MB] (23 MBps) [2024-11-20T08:34:19.550Z] Copying: 141/256 [MB] (22 MBps) [2024-11-20T08:34:20.488Z] Copying: 164/256 [MB] (23 MBps) [2024-11-20T08:34:21.428Z] Copying: 188/256 [MB] (23 MBps) [2024-11-20T08:34:22.810Z] Copying: 212/256 [MB] (23 MBps) [2024-11-20T08:34:23.379Z] Copying: 236/256 [MB] (23 MBps) [2024-11-20T08:34:23.379Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-20 08:34:23.216148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:35.818 [2024-11-20 08:34:23.230542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.230597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:35.818 [2024-11-20 08:34:23.230613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:35.818 [2024-11-20 08:34:23.230637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.230660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:35.818 [2024-11-20 08:34:23.234868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.234900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:35.818 [2024-11-20 08:34:23.234912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:21:35.818 [2024-11-20 08:34:23.234921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.235160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.235174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:35.818 [2024-11-20 08:34:23.235185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:21:35.818 [2024-11-20 08:34:23.235195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.238053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.238202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:35.818 [2024-11-20 08:34:23.238222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:21:35.818 [2024-11-20 08:34:23.238233] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.244008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.244039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:35.818 [2024-11-20 08:34:23.244053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.760 ms 00:21:35.818 [2024-11-20 08:34:23.244063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.280660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.280700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:35.818 [2024-11-20 08:34:23.280715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:21:35.818 [2024-11-20 08:34:23.280725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.302363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.302407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:35.818 [2024-11-20 08:34:23.302421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.618 ms 00:21:35.818 [2024-11-20 08:34:23.302434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.302561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.302575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:35.818 [2024-11-20 08:34:23.302586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:35.818 [2024-11-20 08:34:23.302596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.818 [2024-11-20 08:34:23.339293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.818 [2024-11-20 08:34:23.339457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:35.818 [2024-11-20 08:34:23.339476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.726 ms 00:21:35.818 [2024-11-20 08:34:23.339486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.078 [2024-11-20 08:34:23.374831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.078 [2024-11-20 08:34:23.374867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:36.078 [2024-11-20 08:34:23.374879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.348 ms 00:21:36.078 [2024-11-20 08:34:23.374889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.078 [2024-11-20 08:34:23.410534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.078 [2024-11-20 08:34:23.410570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:36.078 [2024-11-20 08:34:23.410583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.635 ms 00:21:36.078 [2024-11-20 08:34:23.410593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.078 [2024-11-20 08:34:23.446530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.078 [2024-11-20 08:34:23.446566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:36.078 [2024-11-20 08:34:23.446580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.916 ms 00:21:36.079 [2024-11-20 08:34:23.446589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.079 [2024-11-20 08:34:23.446646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:36.079 [2024-11-20 08:34:23.446663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 
[2024-11-20 08:34:23.446907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.446982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:21:36.079 [2024-11-20 08:34:23.447190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:36.079 [2024-11-20 08:34:23.447746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:36.079 [2024-11-20 08:34:23.447756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:36.079 [2024-11-20 08:34:23.447767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:36.079 [2024-11-20 08:34:23.447777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:36.079 [2024-11-20 08:34:23.447786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:36.079 [2024-11-20 08:34:23.447796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:36.079 [2024-11-20 08:34:23.447805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:36.080 [2024-11-20 08:34:23.447815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:36.080 [2024-11-20 08:34:23.447825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:36.080 [2024-11-20 08:34:23.447834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:36.080 [2024-11-20 08:34:23.447842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:36.080 [2024-11-20 08:34:23.447852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.080 [2024-11-20 08:34:23.447866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:36.080 [2024-11-20 08:34:23.447876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:21:36.080 [2024-11-20 08:34:23.447886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.467951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.080 [2024-11-20 08:34:23.467985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:36.080 [2024-11-20 08:34:23.468018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.078 ms 00:21:36.080 [2024-11-20 08:34:23.468028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.468610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.080 [2024-11-20 08:34:23.468627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:36.080 [2024-11-20 08:34:23.468638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:36.080 [2024-11-20 08:34:23.468648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.524807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.080 [2024-11-20 08:34:23.524843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.080 [2024-11-20 08:34:23.524856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.080 [2024-11-20 08:34:23.524882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.524981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.080 [2024-11-20 
08:34:23.525010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:36.080 [2024-11-20 08:34:23.525020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.080 [2024-11-20 08:34:23.525030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.525077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.080 [2024-11-20 08:34:23.525090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:36.080 [2024-11-20 08:34:23.525100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.080 [2024-11-20 08:34:23.525110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.080 [2024-11-20 08:34:23.525144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.080 [2024-11-20 08:34:23.525173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:36.080 [2024-11-20 08:34:23.525183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.080 [2024-11-20 08:34:23.525193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.649638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.649710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:36.339 [2024-11-20 08:34:23.649726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.649753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.749515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.749572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:36.339 [2024-11-20 08:34:23.749587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.749614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.749708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.749721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:36.339 [2024-11-20 08:34:23.749732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.749742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.749770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.749781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:36.339 [2024-11-20 08:34:23.749795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.749805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.749925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.749938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:36.339 [2024-11-20 08:34:23.749949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.749958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.749994] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.750222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:36.339 [2024-11-20 08:34:23.750266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.750304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.750373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.750477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:36.339 [2024-11-20 08:34:23.750508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.750538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.750650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.339 [2024-11-20 08:34:23.750758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:36.339 [2024-11-20 08:34:23.750781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.339 [2024-11-20 08:34:23.750791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.339 [2024-11-20 08:34:23.750940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.232 ms, result 0 00:21:37.278 00:21:37.278 00:21:37.279 08:34:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:37.279 08:34:24 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:37.849 08:34:25 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.849 [2024-11-20 08:34:25.285500] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
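An aside on the two dumps that close the shutdown sequence above: ftl_debug.c prints one "Band N: valid / total wr_cnt: ... state: ..." line per band, then the device statistics, where WAF (write amplification factor) is the ratio of total media writes to user writes and is reported as "inf" here because user writes is 0 (960 / 0). Below is a minimal, hypothetical Python sketch of both calculations; the regex and the helper names (parse_bands, waf) are mine, and only the field layout is taken from the log lines.

```python
import math
import re

# Field layout copied from the ftl_debug.c dump above; the regex and
# helper names are illustrative, not SPDK API.
BAND_RE = re.compile(
    r"Band\s+(\d+):\s+(\d+)\s+/\s+(\d+)\s+wr_cnt:\s+(\d+)\s+state:\s+(\w+)"
)

def parse_bands(dump_text):
    """Yield (band, valid_blocks, total_blocks, wr_cnt, state) per band."""
    for m in BAND_RE.finditer(dump_text):
        band, valid, total, wr_cnt = (int(g) for g in m.groups()[:4])
        yield band, valid, total, wr_cnt, m.group(5)

def waf(total_writes, user_writes):
    """Write amplification factor as printed in the stats dump:
    total (media) writes over user writes, 'inf' when user_writes == 0."""
    return math.inf if user_writes == 0 else total_writes / user_writes

print(next(parse_bands("Band 1: 0 / 261120 wr_cnt: 0 state: free")))
# -> (1, 0, 261120, 0, 'free')
print(waf(960, 0))  # -> inf, matching "WAF: inf" above
```

After the shutdown finishes, the test verifies the data file directly: the cmp --bytes=4194304 call above compares the first 4 MiB of the read-back file against /dev/zero, consistent with a trimmed region reading back as zeroes, before spdk_dd writes the random pattern again.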
00:21:37.849 [2024-11-20 08:34:25.285627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75728 ] 00:21:38.109 [2024-11-20 08:34:25.462775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.109 [2024-11-20 08:34:25.577090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.679 [2024-11-20 08:34:25.933302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.679 [2024-11-20 08:34:25.933381] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.679 [2024-11-20 08:34:26.094699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.094755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:38.679 [2024-11-20 08:34:26.094771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.679 [2024-11-20 08:34:26.094782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.097768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.097807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.679 [2024-11-20 08:34:26.097819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.970 ms 00:21:38.679 [2024-11-20 08:34:26.097829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.097922] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:38.679 [2024-11-20 08:34:26.098930] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:38.679 [2024-11-20 08:34:26.098959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.098970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.679 [2024-11-20 08:34:26.098981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:21:38.679 [2024-11-20 08:34:26.099009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.100490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:38.679 [2024-11-20 08:34:26.119307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.119352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:38.679 [2024-11-20 08:34:26.119366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.849 ms 00:21:38.679 [2024-11-20 08:34:26.119377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.119472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.119485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:38.679 [2024-11-20 08:34:26.119496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:38.679 [2024-11-20 08:34:26.119506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.126324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:38.679 [2024-11-20 08:34:26.126359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.679 [2024-11-20 08:34:26.126370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.790 ms 00:21:38.679 [2024-11-20 08:34:26.126381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.126477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.126491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.679 [2024-11-20 08:34:26.126502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:38.679 [2024-11-20 08:34:26.126513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.126541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.126555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.679 [2024-11-20 08:34:26.126566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:38.679 [2024-11-20 08:34:26.126575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.126598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:38.679 [2024-11-20 08:34:26.131378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.131411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.679 [2024-11-20 08:34:26.131439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.793 ms 00:21:38.679 [2024-11-20 08:34:26.131450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.131514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.131526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.679 [2024-11-20 08:34:26.131537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:38.679 [2024-11-20 08:34:26.131547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.131566] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:38.679 [2024-11-20 08:34:26.131591] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:38.679 [2024-11-20 08:34:26.131631] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:38.679 [2024-11-20 08:34:26.131650] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:38.679 [2024-11-20 08:34:26.131738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.679 [2024-11-20 08:34:26.131751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.679 [2024-11-20 08:34:26.131764] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.679 [2024-11-20 08:34:26.131776] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.679 [2024-11-20 08:34:26.131792] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.679 [2024-11-20 08:34:26.131803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:38.679 [2024-11-20 08:34:26.131813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.679 [2024-11-20 08:34:26.131823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.679 [2024-11-20 08:34:26.131833] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.679 [2024-11-20 08:34:26.131843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.131853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.679 [2024-11-20 08:34:26.131863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:38.679 [2024-11-20 08:34:26.131873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.131948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.679 [2024-11-20 08:34:26.131959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.679 [2024-11-20 08:34:26.131972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:38.679 [2024-11-20 08:34:26.131982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.679 [2024-11-20 08:34:26.132085] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.679 [2024-11-20 08:34:26.132102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.679 [2024-11-20 08:34:26.132113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.679 [2024-11-20 08:34:26.132143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.679 [2024-11-20 08:34:26.132172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.679 [2024-11-20 08:34:26.132191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.679 [2024-11-20 08:34:26.132200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:38.679 [2024-11-20 08:34:26.132209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.679 [2024-11-20 08:34:26.132228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.679 [2024-11-20 08:34:26.132238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:38.679 [2024-11-20 08:34:26.132247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.679 [2024-11-20 08:34:26.132265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132274] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.679 [2024-11-20 08:34:26.132292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.679 [2024-11-20 08:34:26.132320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.679 [2024-11-20 08:34:26.132346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.679 [2024-11-20 08:34:26.132373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.679 [2024-11-20 08:34:26.132391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.679 [2024-11-20 08:34:26.132400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:38.679 [2024-11-20 08:34:26.132409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.679 [2024-11-20 08:34:26.132418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:38.679 [2024-11-20 08:34:26.132427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:38.680 [2024-11-20 08:34:26.132436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.680 [2024-11-20 08:34:26.132445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.680 [2024-11-20 08:34:26.132455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:38.680 [2024-11-20 08:34:26.132464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.680 [2024-11-20 08:34:26.132473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.680 [2024-11-20 08:34:26.132482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:38.680 [2024-11-20 08:34:26.132491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.680 [2024-11-20 08:34:26.132500] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.680 [2024-11-20 08:34:26.132510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.680 [2024-11-20 08:34:26.132520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.680 [2024-11-20 08:34:26.132533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.680 [2024-11-20 08:34:26.132543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.680 [2024-11-20 08:34:26.132552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.680 [2024-11-20 08:34:26.132561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.680 
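The layout numbers printed in this dump are internally consistent and worth a quick cross-check: 23,592,960 L2P entries at 4 bytes each is exactly the 90.00 MiB reported for the l2p region. A small sketch of the arithmetic follows; the 4 KiB logical-block figure in the second check is an assumption about the FTL block size, not something stated in this log.

```python
MiB = 1024 * 1024

l2p_entries = 23_592_960  # "L2P entries" from ftl_layout_setup above
entry_bytes = 4           # "L2P address size"

# Table footprint: entries * bytes-per-entry.
print(l2p_entries * entry_bytes / MiB)  # 90.0 -> "Region l2p ... 90.00 MiB"

# Assuming a 4 KiB logical block (not printed here), the same entry
# count maps 90 GiB of user-addressable space.
print(l2p_entries * 4096 / (1024 ** 3))  # 90.0
```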
[2024-11-20 08:34:26.132570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.680 [2024-11-20 08:34:26.132579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.680 [2024-11-20 08:34:26.132588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.680 [2024-11-20 08:34:26.132598] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.680 [2024-11-20 08:34:26.132610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:38.680 [2024-11-20 08:34:26.132631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:38.680 [2024-11-20 08:34:26.132641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:38.680 [2024-11-20 08:34:26.132651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:38.680 [2024-11-20 08:34:26.132661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:38.680 [2024-11-20 08:34:26.132671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:38.680 [2024-11-20 08:34:26.132681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:38.680 [2024-11-20 08:34:26.132691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:38.680 [2024-11-20 08:34:26.132701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:38.680 [2024-11-20 08:34:26.132711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:38.680 [2024-11-20 08:34:26.132762] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.680 [2024-11-20 08:34:26.132776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.680 [2024-11-20 08:34:26.132798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.680 [2024-11-20 08:34:26.132808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.680 [2024-11-20 08:34:26.132818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.680 [2024-11-20 08:34:26.132828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.132838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.680 [2024-11-20 08:34:26.132853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:21:38.680 [2024-11-20 08:34:26.132862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.172396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.172437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.680 [2024-11-20 08:34:26.172451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.546 ms 00:21:38.680 [2024-11-20 08:34:26.172462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.172581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.172599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:38.680 [2024-11-20 08:34:26.172610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:38.680 [2024-11-20 08:34:26.172620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.232531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.232570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.680 [2024-11-20 08:34:26.232598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.986 ms 00:21:38.680 [2024-11-20 08:34:26.232612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.232706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.232719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.680 [2024-11-20 08:34:26.232730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:38.680 [2024-11-20 08:34:26.232740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.233198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.233213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.680 [2024-11-20 08:34:26.233224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:21:38.680 [2024-11-20 08:34:26.233240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.680 [2024-11-20 08:34:26.233356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.680 [2024-11-20 08:34:26.233370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.680 [2024-11-20 08:34:26.233380] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:38.680 [2024-11-20 08:34:26.233390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.940 [2024-11-20 08:34:26.252927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.940 [2024-11-20 08:34:26.252965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.940 [2024-11-20 08:34:26.252978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.546 ms 00:21:38.940 [2024-11-20 08:34:26.252997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.940 [2024-11-20 08:34:26.271850] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:38.940 [2024-11-20 08:34:26.271891] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:38.940 [2024-11-20 08:34:26.271906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.271933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:38.941 [2024-11-20 08:34:26.271944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.818 ms 00:21:38.941 [2024-11-20 08:34:26.271955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.301213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.301264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:38.941 [2024-11-20 08:34:26.301278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.212 ms 00:21:38.941 [2024-11-20 08:34:26.301288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.318808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.318847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:38.941 [2024-11-20 08:34:26.318875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.470 ms 00:21:38.941 [2024-11-20 08:34:26.318884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.336431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.336469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:38.941 [2024-11-20 08:34:26.336482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.503 ms 00:21:38.941 [2024-11-20 08:34:26.336507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.337247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.337272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:38.941 [2024-11-20 08:34:26.337283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:21:38.941 [2024-11-20 08:34:26.337293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.421910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.421972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.941 [2024-11-20 08:34:26.421994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.727 ms 00:21:38.941 [2024-11-20 08:34:26.422021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.432692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:38.941 [2024-11-20 08:34:26.448628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.448674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.941 [2024-11-20 08:34:26.448705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.556 ms 00:21:38.941 [2024-11-20 08:34:26.448715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.448839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.448852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.941 [2024-11-20 08:34:26.448863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:38.941 [2024-11-20 08:34:26.448873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.448923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.448934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.941 [2024-11-20 08:34:26.448945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:38.941 [2024-11-20 08:34:26.448954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.448980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.448994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:38.941 [2024-11-20 08:34:26.449037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.941 [2024-11-20 08:34:26.449048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.449085] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.941 [2024-11-20 08:34:26.449097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.449107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.941 [2024-11-20 08:34:26.449117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:38.941 [2024-11-20 08:34:26.449127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.484819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.484860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.941 [2024-11-20 08:34:26.484890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.730 ms 00:21:38.941 [2024-11-20 08:34:26.484900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.941 [2024-11-20 08:34:26.485038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.941 [2024-11-20 08:34:26.485053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.941 [2024-11-20 08:34:26.485064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:38.941 [2024-11-20 08:34:26.485074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
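Each management step in the startup sequence above is traced as a run of consecutive notices from mngt/ftl_mngt.c (Action or Rollback, then name, duration, status). The sketch below is a hypothetical helper for ranking those steps by duration; it assumes the console's native one-entry-per-line form rather than the wrapped text shown here, and step_durations and both regexes are mine.

```python
import re

# Match the "name: ..." and "duration: ... ms" trace_step notices
# emitted for every management step.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Pair each step name with the duration notice that follows it and
    return the steps sorted slowest-first."""
    totals, name = {}, None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            name = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and name is not None:
            totals[name] = totals.get(name, 0.0) + float(m.group(1))
            name = None
    return sorted(totals.items(), key=lambda kv: -kv[1])
```

Run over this startup, such a ranking would be dominated by Restore P2L checkpoints (84.727 ms) and Initialize NV cache (59.986 ms), consistent with the 391.668 ms total reported for 'FTL startup' just below.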
00:21:38.941 [2024-11-20 08:34:26.486023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.941 [2024-11-20 08:34:26.490418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.668 ms, result 0 00:21:38.941 [2024-11-20 08:34:26.491379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:39.201 [2024-11-20 08:34:26.509754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.201  [2024-11-20T08:34:26.762Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-20 08:34:26.690550] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:39.201 [2024-11-20 08:34:26.705144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.705188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:39.201 [2024-11-20 08:34:26.705219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:39.201 [2024-11-20 08:34:26.705235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.201 [2024-11-20 08:34:26.705258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:39.201 [2024-11-20 08:34:26.709318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.709344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:39.201 [2024-11-20 08:34:26.709357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:21:39.201 [2024-11-20 08:34:26.709382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.201 [2024-11-20 08:34:26.711610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.711648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:39.201 [2024-11-20 08:34:26.711660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:21:39.201 [2024-11-20 08:34:26.711670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.201 [2024-11-20 08:34:26.714858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.714899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:39.201 [2024-11-20 08:34:26.714911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.175 ms 00:21:39.201 [2024-11-20 08:34:26.714921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.201 [2024-11-20 08:34:26.720463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.720495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:39.201 [2024-11-20 08:34:26.720507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.520 ms 00:21:39.201 [2024-11-20 08:34:26.720532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.201 [2024-11-20 08:34:26.756560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.201 [2024-11-20 08:34:26.756599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:39.201 [2024-11-20 08:34:26.756628] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.009 ms 00:21:39.201 [2024-11-20 08:34:26.756638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.462 [2024-11-20 08:34:26.778372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.778418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:39.463 [2024-11-20 08:34:26.778436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.699 ms 00:21:39.463 [2024-11-20 08:34:26.778446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.778573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.778586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:39.463 [2024-11-20 08:34:26.778598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:39.463 [2024-11-20 08:34:26.778607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.815043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.815083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:39.463 [2024-11-20 08:34:26.815096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.466 ms 00:21:39.463 [2024-11-20 08:34:26.815106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.850805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.850845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:39.463 [2024-11-20 08:34:26.850857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.705 ms 00:21:39.463 [2024-11-20 08:34:26.850866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.886110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.886157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:39.463 [2024-11-20 08:34:26.886170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.249 ms 00:21:39.463 [2024-11-20 08:34:26.886180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.922233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.463 [2024-11-20 08:34:26.922271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:39.463 [2024-11-20 08:34:26.922284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.034 ms 00:21:39.463 [2024-11-20 08:34:26.922293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.463 [2024-11-20 08:34:26.922346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:39.463 [2024-11-20 08:34:26.922363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:39.463 [2024-11-20 08:34:26.922409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.922985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:39.463 [2024-11-20 08:34:26.923064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923177] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:39.464 [2024-11-20 08:34:26.923426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:39.464 [2024-11-20 08:34:26.923436] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:39.464 [2024-11-20 08:34:26.923447] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:39.464 [2024-11-20 08:34:26.923456] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:39.464 [2024-11-20 08:34:26.923465] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:39.464 [2024-11-20 08:34:26.923475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:39.464 [2024-11-20 08:34:26.923485] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:39.464 [2024-11-20 08:34:26.923495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:39.464 [2024-11-20 08:34:26.923505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:39.464 [2024-11-20 08:34:26.923514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:39.464 [2024-11-20 08:34:26.923522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:39.464 [2024-11-20 08:34:26.923532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.464 [2024-11-20 08:34:26.923546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:39.464 [2024-11-20 08:34:26.923557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:21:39.464 [2024-11-20 08:34:26.923566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.943030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.464 [2024-11-20 08:34:26.943066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:39.464 [2024-11-20 08:34:26.943078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.476 ms 00:21:39.464 [2024-11-20 08:34:26.943088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.943680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.464 [2024-11-20 08:34:26.943698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:39.464 [2024-11-20 08:34:26.943709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:39.464 [2024-11-20 08:34:26.943719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.999418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.464 [2024-11-20 08:34:26.999457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.464 [2024-11-20 08:34:26.999470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.464 [2024-11-20 08:34:26.999480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.999591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.464 [2024-11-20 08:34:26.999603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.464 [2024-11-20 08:34:26.999614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.464 [2024-11-20 08:34:26.999624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.999680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.464 [2024-11-20 08:34:26.999692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.464 [2024-11-20 08:34:26.999703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.464 [2024-11-20 08:34:26.999712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.464 [2024-11-20 08:34:26.999730] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.464 [2024-11-20 08:34:26.999744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.464 [2024-11-20 08:34:26.999754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.464 [2024-11-20 08:34:26.999763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.123397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.123455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.725 [2024-11-20 08:34:27.123470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.123496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.222478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.222538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.725 [2024-11-20 08:34:27.222553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.222563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.222632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.222644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.725 [2024-11-20 08:34:27.222654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.222665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.222693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.222704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.725 [2024-11-20 08:34:27.222720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.222730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.222853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.222866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.725 [2024-11-20 08:34:27.222877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.222887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.222922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.222934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:39.725 [2024-11-20 08:34:27.222944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.222958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.223015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.223026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.725 [2024-11-20 08:34:27.223036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.223046] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.223090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.725 [2024-11-20 08:34:27.223101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.725 [2024-11-20 08:34:27.223115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.725 [2024-11-20 08:34:27.223125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.725 [2024-11-20 08:34:27.223255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 518.943 ms, result 0 00:21:41.105 00:21:41.105 00:21:41.105 08:34:28 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75765 00:21:41.105 08:34:28 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:41.105 08:34:28 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75765 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # '[' -z 75765 ']' 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:41.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:41.105 08:34:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:41.105 [2024-11-20 08:34:28.362464] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
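For reference, the waitforlisten step traced above reduces to launching spdk_tgt in the background and polling its RPC socket until it answers. A minimal sketch of that pattern in shell, assuming a local SPDK checkout — the real waitforlisten helper in autotest_common.sh adds a retry budget (the max_retries=100 visible in the xtrace) and error handling:

    # sketch only -- not the actual autotest_common.sh helper
    ./build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll the UNIX-domain RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the test drives the target over rpc.py, as the load_config and bdev_ftl_unmap calls below show.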
00:21:41.105 [2024-11-20 08:34:28.362595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75765 ] 00:21:41.105 [2024-11-20 08:34:28.540152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.105 [2024-11-20 08:34:28.653570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.067 08:34:29 ftl.ftl_trim -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:42.067 08:34:29 ftl.ftl_trim -- common/autotest_common.sh@871 -- # return 0 00:21:42.067 08:34:29 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:42.326 [2024-11-20 08:34:29.741136] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:42.326 [2024-11-20 08:34:29.741200] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:42.650 [2024-11-20 08:34:29.924685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.924744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:42.650 [2024-11-20 08:34:29.924765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:42.650 [2024-11-20 08:34:29.924776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.928760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.928804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.650 [2024-11-20 08:34:29.928819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.969 ms 00:21:42.650 [2024-11-20 08:34:29.928829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.928955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:42.650 [2024-11-20 08:34:29.930017] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:42.650 [2024-11-20 08:34:29.930054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.930065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.650 [2024-11-20 08:34:29.930078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:21:42.650 [2024-11-20 08:34:29.930089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.931557] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:42.650 [2024-11-20 08:34:29.951337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.951382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:42.650 [2024-11-20 08:34:29.951413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.816 ms 00:21:42.650 [2024-11-20 08:34:29.951426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.951520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.951536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:42.650 [2024-11-20 08:34:29.951548] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:42.650 [2024-11-20 08:34:29.951560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.958337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.958375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.650 [2024-11-20 08:34:29.958387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:21:42.650 [2024-11-20 08:34:29.958400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.958506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.958522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.650 [2024-11-20 08:34:29.958533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:42.650 [2024-11-20 08:34:29.958546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.958582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.958596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:42.650 [2024-11-20 08:34:29.958607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:42.650 [2024-11-20 08:34:29.958619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.958643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:42.650 [2024-11-20 08:34:29.963359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.963394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.650 [2024-11-20 08:34:29.963409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.726 ms 00:21:42.650 [2024-11-20 08:34:29.963419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.963489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.963502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:42.650 [2024-11-20 08:34:29.963515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:42.650 [2024-11-20 08:34:29.963528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.963552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:42.650 [2024-11-20 08:34:29.963573] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:42.650 [2024-11-20 08:34:29.963618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:42.650 [2024-11-20 08:34:29.963637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:42.650 [2024-11-20 08:34:29.963736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:42.650 [2024-11-20 08:34:29.963750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:42.650 [2024-11-20 08:34:29.963771] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:42.650 [2024-11-20 08:34:29.963789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:42.650 [2024-11-20 08:34:29.963806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:42.650 [2024-11-20 08:34:29.963817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:42.650 [2024-11-20 08:34:29.963832] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:42.650 [2024-11-20 08:34:29.963842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:42.650 [2024-11-20 08:34:29.963861] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:42.650 [2024-11-20 08:34:29.963872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.963886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:42.650 [2024-11-20 08:34:29.963897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:21:42.650 [2024-11-20 08:34:29.963912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.964002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.650 [2024-11-20 08:34:29.964019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:42.650 [2024-11-20 08:34:29.964030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:42.650 [2024-11-20 08:34:29.964044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.650 [2024-11-20 08:34:29.964132] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:42.650 [2024-11-20 08:34:29.964149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:42.650 [2024-11-20 08:34:29.964160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:42.651 [2024-11-20 08:34:29.964199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:42.651 [2024-11-20 08:34:29.964241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.651 [2024-11-20 08:34:29.964265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:42.651 [2024-11-20 08:34:29.964279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:42.651 [2024-11-20 08:34:29.964288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.651 [2024-11-20 08:34:29.964302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:42.651 [2024-11-20 08:34:29.964312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:42.651 [2024-11-20 08:34:29.964326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 
[2024-11-20 08:34:29.964336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:42.651 [2024-11-20 08:34:29.964350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:42.651 [2024-11-20 08:34:29.964394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:42.651 [2024-11-20 08:34:29.964436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:42.651 [2024-11-20 08:34:29.964469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:42.651 [2024-11-20 08:34:29.964506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:42.651 [2024-11-20 08:34:29.964540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.651 [2024-11-20 08:34:29.964565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:42.651 [2024-11-20 08:34:29.964580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:42.651 [2024-11-20 08:34:29.964589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.651 [2024-11-20 08:34:29.964604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:42.651 [2024-11-20 08:34:29.964613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:42.651 [2024-11-20 08:34:29.964633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:42.651 [2024-11-20 08:34:29.964657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:42.651 [2024-11-20 08:34:29.964667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964681] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:42.651 [2024-11-20 08:34:29.964692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:42.651 [2024-11-20 08:34:29.964711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.651 [2024-11-20 08:34:29.964737] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:42.651 [2024-11-20 08:34:29.964747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:42.651 [2024-11-20 08:34:29.964760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:42.651 [2024-11-20 08:34:29.964770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:42.651 [2024-11-20 08:34:29.964784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:42.651 [2024-11-20 08:34:29.964794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:42.651 [2024-11-20 08:34:29.964809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:42.651 [2024-11-20 08:34:29.964822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.964842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:42.651 [2024-11-20 08:34:29.964853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:42.651 [2024-11-20 08:34:29.964870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:42.651 [2024-11-20 08:34:29.964881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:42.651 [2024-11-20 08:34:29.964896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:42.651 [2024-11-20 08:34:29.964907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:42.651 [2024-11-20 08:34:29.964921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:42.651 [2024-11-20 08:34:29.964932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:42.651 [2024-11-20 08:34:29.964947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:42.651 [2024-11-20 08:34:29.964958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.964973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.964984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.965008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.965020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:42.651 [2024-11-20 08:34:29.965035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:42.651 [2024-11-20 
08:34:29.965046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.965067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:42.651 [2024-11-20 08:34:29.965078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:42.651 [2024-11-20 08:34:29.965093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:42.651 [2024-11-20 08:34:29.965104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:42.651 [2024-11-20 08:34:29.965120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:29.965131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:42.651 [2024-11-20 08:34:29.965146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:21:42.651 [2024-11-20 08:34:29.965156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.006908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:30.006952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.651 [2024-11-20 08:34:30.006972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.748 ms 00:21:42.651 [2024-11-20 08:34:30.006983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.007131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:30.007144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:42.651 [2024-11-20 08:34:30.007160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:42.651 [2024-11-20 08:34:30.007171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.053252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:30.053293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.651 [2024-11-20 08:34:30.053318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.123 ms 00:21:42.651 [2024-11-20 08:34:30.053329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.053439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:30.053452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.651 [2024-11-20 08:34:30.053468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:42.651 [2024-11-20 08:34:30.053479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.053922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.651 [2024-11-20 08:34:30.053941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.651 [2024-11-20 08:34:30.053963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:21:42.651 [2024-11-20 08:34:30.053973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:42.651 [2024-11-20 08:34:30.054111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.054125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.652 [2024-11-20 08:34:30.054150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:42.652 [2024-11-20 08:34:30.054160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.077771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.077811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.652 [2024-11-20 08:34:30.077830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.618 ms 00:21:42.652 [2024-11-20 08:34:30.077841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.097878] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:42.652 [2024-11-20 08:34:30.097916] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:42.652 [2024-11-20 08:34:30.097936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.097947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:42.652 [2024-11-20 08:34:30.097964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.005 ms 00:21:42.652 [2024-11-20 08:34:30.097974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.128295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.128335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:42.652 [2024-11-20 08:34:30.128356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.267 ms 00:21:42.652 [2024-11-20 08:34:30.128368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.146884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.146921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:42.652 [2024-11-20 08:34:30.146944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.453 ms 00:21:42.652 [2024-11-20 08:34:30.146955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.164804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.164837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:42.652 [2024-11-20 08:34:30.164872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.787 ms 00:21:42.652 [2024-11-20 08:34:30.164882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.652 [2024-11-20 08:34:30.165676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.652 [2024-11-20 08:34:30.165705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:42.652 [2024-11-20 08:34:30.165722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:21:42.652 [2024-11-20 08:34:30.165733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 
08:34:30.261469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.261526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:42.911 [2024-11-20 08:34:30.261561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.858 ms 00:21:42.911 [2024-11-20 08:34:30.261572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.272401] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:42.911 [2024-11-20 08:34:30.288594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.288658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:42.911 [2024-11-20 08:34:30.288677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.962 ms 00:21:42.911 [2024-11-20 08:34:30.288690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.288808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.288825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:42.911 [2024-11-20 08:34:30.288837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:42.911 [2024-11-20 08:34:30.288850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.288901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.288914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:42.911 [2024-11-20 08:34:30.288925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:42.911 [2024-11-20 08:34:30.288938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.288965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.288978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.911 [2024-11-20 08:34:30.289006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:42.911 [2024-11-20 08:34:30.289023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.289058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:42.911 [2024-11-20 08:34:30.289076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.289086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:42.911 [2024-11-20 08:34:30.289103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:42.911 [2024-11-20 08:34:30.289114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.325479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.325522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.911 [2024-11-20 08:34:30.325539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.392 ms 00:21:42.911 [2024-11-20 08:34:30.325550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.325665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.911 [2024-11-20 08:34:30.325678] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.911 [2024-11-20 08:34:30.325692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:42.911 [2024-11-20 08:34:30.325705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.911 [2024-11-20 08:34:30.326594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.911 [2024-11-20 08:34:30.330944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.277 ms, result 0 00:21:42.911 [2024-11-20 08:34:30.332433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.911 Some configs were skipped because the RPC state that can call them passed over. 00:21:42.911 08:34:30 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:43.171 [2024-11-20 08:34:30.608271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.171 [2024-11-20 08:34:30.608333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:43.171 [2024-11-20 08:34:30.608349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:21:43.171 [2024-11-20 08:34:30.608363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.171 [2024-11-20 08:34:30.608398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.743 ms, result 0 00:21:43.171 true 00:21:43.171 08:34:30 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:43.430 [2024-11-20 08:34:30.827688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.430 [2024-11-20 08:34:30.827738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:43.430 [2024-11-20 08:34:30.827756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:21:43.430 [2024-11-20 08:34:30.827766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.430 [2024-11-20 08:34:30.827807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.296 ms, result 0 00:21:43.430 true 00:21:43.430 08:34:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75765 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' -z 75765 ']' 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@961 -- # kill -0 75765 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@962 -- # uname 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75765 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:21:43.430 killing process with pid 75765 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75765' 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@976 -- # kill 75765 00:21:43.430 08:34:30 ftl.ftl_trim -- common/autotest_common.sh@981 -- # wait 75765 00:21:44.810 [2024-11-20 08:34:32.002452] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.810 [2024-11-20 08:34:32.002521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:44.810 [2024-11-20 08:34:32.002537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:44.810 [2024-11-20 08:34:32.002550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.810 [2024-11-20 08:34:32.002573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:44.810 [2024-11-20 08:34:32.006911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.810 [2024-11-20 08:34:32.006952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:44.810 [2024-11-20 08:34:32.006969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:21:44.810 [2024-11-20 08:34:32.006980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.810 [2024-11-20 08:34:32.007230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.810 [2024-11-20 08:34:32.007243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:44.810 [2024-11-20 08:34:32.007256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:44.810 [2024-11-20 08:34:32.007266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.810 [2024-11-20 08:34:32.010613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.810 [2024-11-20 08:34:32.010654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:44.810 [2024-11-20 08:34:32.010672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:21:44.811 [2024-11-20 08:34:32.010682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.016310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.016348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:44.811 [2024-11-20 08:34:32.016362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.597 ms 00:21:44.811 [2024-11-20 08:34:32.016372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.030891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.030928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:44.811 [2024-11-20 08:34:32.030946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:21:44.811 [2024-11-20 08:34:32.030966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.041707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.041743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:44.811 [2024-11-20 08:34:32.041763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.677 ms 00:21:44.811 [2024-11-20 08:34:32.041774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.041913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.041927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:44.811 [2024-11-20 08:34:32.041940] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:44.811 [2024-11-20 08:34:32.041950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.057709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.057746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:44.811 [2024-11-20 08:34:32.057761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.761 ms 00:21:44.811 [2024-11-20 08:34:32.057770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.072697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.072734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:44.811 [2024-11-20 08:34:32.072752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.898 ms 00:21:44.811 [2024-11-20 08:34:32.072761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.087305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.087343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:44.811 [2024-11-20 08:34:32.087361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:21:44.811 [2024-11-20 08:34:32.087371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.101788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.811 [2024-11-20 08:34:32.101824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:44.811 [2024-11-20 08:34:32.101856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.364 ms 00:21:44.811 [2024-11-20 08:34:32.101865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.811 [2024-11-20 08:34:32.101915] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:44.811 [2024-11-20 08:34:32.101931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.101947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.101958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.101972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.101983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 
08:34:32.102068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:44.811 [2024-11-20 08:34:32.102381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:44.811 [2024-11-20 08:34:32.102673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.102997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:44.812 [2024-11-20 08:34:32.103186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:44.812 [2024-11-20 08:34:32.103205] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:44.812 [2024-11-20 08:34:32.103226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:44.812 [2024-11-20 08:34:32.103242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:44.812 [2024-11-20 08:34:32.103252] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:44.812 [2024-11-20 08:34:32.103264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:44.812 [2024-11-20 08:34:32.103274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:44.812 [2024-11-20 08:34:32.103287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:44.812 [2024-11-20 08:34:32.103297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:44.812 [2024-11-20 08:34:32.103308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:44.812 [2024-11-20 08:34:32.103317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:44.812 [2024-11-20 08:34:32.103329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:44.812 [2024-11-20 08:34:32.103339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:44.812 [2024-11-20 08:34:32.103353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:21:44.812 [2024-11-20 08:34:32.103363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.123441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.812 [2024-11-20 08:34:32.123475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:44.812 [2024-11-20 08:34:32.123508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:21:44.812 [2024-11-20 08:34:32.123519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.124085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.812 [2024-11-20 08:34:32.124103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:44.812 [2024-11-20 08:34:32.124117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:21:44.812 [2024-11-20 08:34:32.124130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.191883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:44.812 [2024-11-20 08:34:32.191923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:44.812 [2024-11-20 08:34:32.191939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:44.812 [2024-11-20 08:34:32.191950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.192041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:44.812 [2024-11-20 08:34:32.192054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:44.812 [2024-11-20 08:34:32.192068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:44.812 [2024-11-20 08:34:32.192081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.192131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:44.812 [2024-11-20 08:34:32.192144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:44.812 [2024-11-20 08:34:32.192160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:44.812 [2024-11-20 08:34:32.192170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.192193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:44.812 [2024-11-20 08:34:32.192203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:44.812 [2024-11-20 08:34:32.192216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:44.812 [2024-11-20 08:34:32.192226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.812 [2024-11-20 08:34:32.314702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:44.812 [2024-11-20 08:34:32.314761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:44.812 [2024-11-20 08:34:32.314779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:44.812 [2024-11-20 08:34:32.314805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 
08:34:32.414508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.414556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.072 [2024-11-20 08:34:32.414573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.414587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.414672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.414684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.072 [2024-11-20 08:34:32.414701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.414711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.414759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.414770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.072 [2024-11-20 08:34:32.414783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.414793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.414922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.414935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.072 [2024-11-20 08:34:32.414948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.414959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.415011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.415023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:45.072 [2024-11-20 08:34:32.415037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.415046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.415086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.415100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.072 [2024-11-20 08:34:32.415115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.415125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.415171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.072 [2024-11-20 08:34:32.415183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.072 [2024-11-20 08:34:32.415196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.072 [2024-11-20 08:34:32.415206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.072 [2024-11-20 08:34:32.415350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.532 ms, result 0 00:21:46.010 08:34:33 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:46.010 [2024-11-20 08:34:33.503741] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:21:46.010 [2024-11-20 08:34:33.503864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75830 ] 00:21:46.270 [2024-11-20 08:34:33.682532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.270 [2024-11-20 08:34:33.793670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.840 [2024-11-20 08:34:34.139124] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.840 [2024-11-20 08:34:34.139186] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.840 [2024-11-20 08:34:34.300570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.300619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:46.840 [2024-11-20 08:34:34.300635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.840 [2024-11-20 08:34:34.300646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.303696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.303738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.840 [2024-11-20 08:34:34.303751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:21:46.840 [2024-11-20 08:34:34.303761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.303854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:46.840 [2024-11-20 08:34:34.304861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:46.840 [2024-11-20 08:34:34.304896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.304907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.840 [2024-11-20 08:34:34.304918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:21:46.840 [2024-11-20 08:34:34.304927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.306541] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:46.840 [2024-11-20 08:34:34.325929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.325968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:46.840 [2024-11-20 08:34:34.325981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.420 ms 00:21:46.840 [2024-11-20 08:34:34.326005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.326114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.326130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:46.840 [2024-11-20 08:34:34.326150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:46.840 [2024-11-20 
08:34:34.326160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.332895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.333071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.840 [2024-11-20 08:34:34.333091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:21:46.840 [2024-11-20 08:34:34.333102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.840 [2024-11-20 08:34:34.333205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.840 [2024-11-20 08:34:34.333218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.840 [2024-11-20 08:34:34.333230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:46.841 [2024-11-20 08:34:34.333240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.333267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.841 [2024-11-20 08:34:34.333281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:46.841 [2024-11-20 08:34:34.333292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:46.841 [2024-11-20 08:34:34.333301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.333326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:46.841 [2024-11-20 08:34:34.338065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.841 [2024-11-20 08:34:34.338094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.841 [2024-11-20 08:34:34.338106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:21:46.841 [2024-11-20 08:34:34.338116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.338190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.841 [2024-11-20 08:34:34.338203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:46.841 [2024-11-20 08:34:34.338214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:46.841 [2024-11-20 08:34:34.338223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.338243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:46.841 [2024-11-20 08:34:34.338268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:46.841 [2024-11-20 08:34:34.338302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:46.841 [2024-11-20 08:34:34.338320] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:46.841 [2024-11-20 08:34:34.338407] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:46.841 [2024-11-20 08:34:34.338421] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:46.841 [2024-11-20 08:34:34.338434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:46.841 [2024-11-20 08:34:34.338447] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338462] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338474] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:46.841 [2024-11-20 08:34:34.338484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:46.841 [2024-11-20 08:34:34.338493] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:46.841 [2024-11-20 08:34:34.338503] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:46.841 [2024-11-20 08:34:34.338513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.841 [2024-11-20 08:34:34.338523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:46.841 [2024-11-20 08:34:34.338534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:46.841 [2024-11-20 08:34:34.338543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.338618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.841 [2024-11-20 08:34:34.338629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:46.841 [2024-11-20 08:34:34.338643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:46.841 [2024-11-20 08:34:34.338653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.841 [2024-11-20 08:34:34.338743] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:46.841 [2024-11-20 08:34:34.338755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:46.841 [2024-11-20 08:34:34.338766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:46.841 [2024-11-20 08:34:34.338796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:46.841 [2024-11-20 08:34:34.338825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.841 [2024-11-20 08:34:34.338843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:46.841 [2024-11-20 08:34:34.338852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:46.841 [2024-11-20 08:34:34.338861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.841 [2024-11-20 08:34:34.338881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:46.841 [2024-11-20 08:34:34.338891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:46.841 [2024-11-20 08:34:34.338900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:46.841 [2024-11-20 08:34:34.338918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:46.841 [2024-11-20 08:34:34.338946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.841 [2024-11-20 08:34:34.338964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:46.841 [2024-11-20 08:34:34.338973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:46.841 [2024-11-20 08:34:34.338982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.841 [2024-11-20 08:34:34.339014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:46.841 [2024-11-20 08:34:34.339024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.841 [2024-11-20 08:34:34.339041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:46.841 [2024-11-20 08:34:34.339051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.841 [2024-11-20 08:34:34.339070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:46.841 [2024-11-20 08:34:34.339079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.841 [2024-11-20 08:34:34.339096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:46.841 [2024-11-20 08:34:34.339106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:46.841 [2024-11-20 08:34:34.339115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.841 [2024-11-20 08:34:34.339124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:46.841 [2024-11-20 08:34:34.339133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:46.841 [2024-11-20 08:34:34.339142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:46.841 [2024-11-20 08:34:34.339160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:46.841 [2024-11-20 08:34:34.339171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:46.841 [2024-11-20 08:34:34.339189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:46.841 [2024-11-20 08:34:34.339199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.841 [2024-11-20 08:34:34.339222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.841 [2024-11-20 08:34:34.339232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:46.841 [2024-11-20 08:34:34.339241] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:46.841 [2024-11-20 08:34:34.339250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:46.841 [2024-11-20 08:34:34.339259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:46.841 [2024-11-20 08:34:34.339268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:46.841 [2024-11-20 08:34:34.339277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:46.841 [2024-11-20 08:34:34.339287] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:46.841 [2024-11-20 08:34:34.339298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.841 [2024-11-20 08:34:34.339309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:46.841 [2024-11-20 08:34:34.339319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:46.841 [2024-11-20 08:34:34.339329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:46.841 [2024-11-20 08:34:34.339339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:46.841 [2024-11-20 08:34:34.339349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:46.841 [2024-11-20 08:34:34.339359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:46.841 [2024-11-20 08:34:34.339369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:46.841 [2024-11-20 08:34:34.339379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:46.841 [2024-11-20 08:34:34.339388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:46.842 [2024-11-20 08:34:34.339398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:46.842 [2024-11-20 08:34:34.339447] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:46.842 [2024-11-20 08:34:34.339458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:46.842 [2024-11-20 08:34:34.339479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:46.842 [2024-11-20 08:34:34.339489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:46.842 [2024-11-20 08:34:34.339503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:46.842 [2024-11-20 08:34:34.339514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.842 [2024-11-20 08:34:34.339524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:46.842 [2024-11-20 08:34:34.339539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:21:46.842 [2024-11-20 08:34:34.339548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.842 [2024-11-20 08:34:34.380042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.842 [2024-11-20 08:34:34.380227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.842 [2024-11-20 08:34:34.380248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.506 ms 00:21:46.842 [2024-11-20 08:34:34.380259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.842 [2024-11-20 08:34:34.380385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.842 [2024-11-20 08:34:34.380403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.842 [2024-11-20 08:34:34.380414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:46.842 [2024-11-20 08:34:34.380424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.438693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.438842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.102 [2024-11-20 08:34:34.438863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.341 ms 00:21:47.102 [2024-11-20 08:34:34.438880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.438976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.439006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.102 [2024-11-20 08:34:34.439018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:47.102 [2024-11-20 08:34:34.439028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.439466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.439479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.102 [2024-11-20 08:34:34.439490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:21:47.102 [2024-11-20 08:34:34.439506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.439620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.439634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.102 [2024-11-20 08:34:34.439644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:47.102 [2024-11-20 08:34:34.439654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.460237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.460281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.102 [2024-11-20 08:34:34.460294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.595 ms 00:21:47.102 [2024-11-20 08:34:34.460305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.478723] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:47.102 [2024-11-20 08:34:34.478761] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:47.102 [2024-11-20 08:34:34.478776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.478787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:47.102 [2024-11-20 08:34:34.478798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:21:47.102 [2024-11-20 08:34:34.478808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.508427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.508474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:47.102 [2024-11-20 08:34:34.508487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.590 ms 00:21:47.102 [2024-11-20 08:34:34.508513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.526759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.526795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:47.102 [2024-11-20 08:34:34.526808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.195 ms 00:21:47.102 [2024-11-20 08:34:34.526818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.545223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.545257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:47.102 [2024-11-20 08:34:34.545270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.360 ms 00:21:47.102 [2024-11-20 08:34:34.545280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.545963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 08:34:34.545999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:47.102 [2024-11-20 08:34:34.546012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:21:47.102 [2024-11-20 08:34:34.546023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.629991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.102 [2024-11-20 
08:34:34.630054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:47.102 [2024-11-20 08:34:34.630070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.077 ms 00:21:47.102 [2024-11-20 08:34:34.630097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.102 [2024-11-20 08:34:34.640737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:47.103 [2024-11-20 08:34:34.656653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.103 [2024-11-20 08:34:34.656701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:47.103 [2024-11-20 08:34:34.656717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.494 ms 00:21:47.103 [2024-11-20 08:34:34.656743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.103 [2024-11-20 08:34:34.656869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.103 [2024-11-20 08:34:34.656883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:47.103 [2024-11-20 08:34:34.656894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:47.103 [2024-11-20 08:34:34.656904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.103 [2024-11-20 08:34:34.656956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.103 [2024-11-20 08:34:34.656968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:47.103 [2024-11-20 08:34:34.656978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:47.103 [2024-11-20 08:34:34.656988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.103 [2024-11-20 08:34:34.657039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.103 [2024-11-20 08:34:34.657055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:47.103 [2024-11-20 08:34:34.657066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:47.103 [2024-11-20 08:34:34.657076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.103 [2024-11-20 08:34:34.657108] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:47.103 [2024-11-20 08:34:34.657120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.103 [2024-11-20 08:34:34.657130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:47.103 [2024-11-20 08:34:34.657140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:47.103 [2024-11-20 08:34:34.657150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.362 [2024-11-20 08:34:34.694079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.362 [2024-11-20 08:34:34.694249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:47.362 [2024-11-20 08:34:34.694270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.964 ms 00:21:47.362 [2024-11-20 08:34:34.694281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.362 [2024-11-20 08:34:34.694392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.362 [2024-11-20 08:34:34.694406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:47.362 [2024-11-20 
08:34:34.694417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:47.362 [2024-11-20 08:34:34.694427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.362 [2024-11-20 08:34:34.695306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.363 [2024-11-20 08:34:34.699350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.081 ms, result 0 00:21:47.363 [2024-11-20 08:34:34.700220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:47.363 [2024-11-20 08:34:34.718661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.301  [2024-11-20T08:34:36.800Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T08:34:38.180Z] Copying: 51/256 [MB] (24 MBps) [2024-11-20T08:34:39.120Z] Copying: 77/256 [MB] (25 MBps) [2024-11-20T08:34:40.061Z] Copying: 102/256 [MB] (24 MBps) [2024-11-20T08:34:41.038Z] Copying: 126/256 [MB] (24 MBps) [2024-11-20T08:34:41.977Z] Copying: 151/256 [MB] (24 MBps) [2024-11-20T08:34:42.913Z] Copying: 176/256 [MB] (25 MBps) [2024-11-20T08:34:43.849Z] Copying: 201/256 [MB] (24 MBps) [2024-11-20T08:34:44.786Z] Copying: 225/256 [MB] (24 MBps) [2024-11-20T08:34:45.046Z] Copying: 249/256 [MB] (23 MBps) [2024-11-20T08:34:45.614Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 08:34:45.479924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.053 [2024-11-20 08:34:45.503008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.503084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:58.054 [2024-11-20 08:34:45.503104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:58.054 [2024-11-20 08:34:45.503131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.503165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:58.054 [2024-11-20 08:34:45.507849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.507886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.054 [2024-11-20 08:34:45.507901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.672 ms 00:21:58.054 [2024-11-20 08:34:45.507912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.508201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.508217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.054 [2024-11-20 08:34:45.508230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:21:58.054 [2024-11-20 08:34:45.508241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.511156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.511186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.054 [2024-11-20 08:34:45.511200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:21:58.054 [2024-11-20 08:34:45.511211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.516843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.517112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.054 [2024-11-20 08:34:45.517141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.618 ms 00:21:58.054 [2024-11-20 08:34:45.517152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.558057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.558140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.054 [2024-11-20 08:34:45.558160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.860 ms 00:21:58.054 [2024-11-20 08:34:45.558172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.581887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.581976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.054 [2024-11-20 08:34:45.582017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.636 ms 00:21:58.054 [2024-11-20 08:34:45.582034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.054 [2024-11-20 08:34:45.582278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.054 [2024-11-20 08:34:45.582295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.054 [2024-11-20 08:34:45.582309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:58.054 [2024-11-20 08:34:45.582320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.314 [2024-11-20 08:34:45.622798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.314 [2024-11-20 08:34:45.622877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.314 [2024-11-20 08:34:45.622897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.502 ms 00:21:58.314 [2024-11-20 08:34:45.622909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.314 [2024-11-20 08:34:45.663089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.314 [2024-11-20 08:34:45.663167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.314 [2024-11-20 08:34:45.663186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.119 ms 00:21:58.314 [2024-11-20 08:34:45.663197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.314 [2024-11-20 08:34:45.702947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.314 [2024-11-20 08:34:45.703034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.314 [2024-11-20 08:34:45.703054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.705 ms 00:21:58.314 [2024-11-20 08:34:45.703065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.314 [2024-11-20 08:34:45.742373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.314 [2024-11-20 08:34:45.742683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.314 [2024-11-20 08:34:45.742713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.217 ms 00:21:58.314 
[2024-11-20 08:34:45.742724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.314 [2024-11-20 08:34:45.742860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.314 [2024-11-20 08:34:45.742887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-97: 0 / 261120 wr_cnt: 0 state: free, 96 identical ftl_dev_dump_bands entries ...] 00:21:58.315 [2024-11-20 08:34:45.744031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120
wr_cnt: 0 state: free 00:21:58.315 [2024-11-20 08:34:45.744042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.315 [2024-11-20 08:34:45.744054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.315 [2024-11-20 08:34:45.744074] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.315 [2024-11-20 08:34:45.744086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ff582c-205c-44ba-a49f-16afcb66c63b 00:21:58.315 [2024-11-20 08:34:45.744097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:58.315 [2024-11-20 08:34:45.744108] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:58.315 [2024-11-20 08:34:45.744119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:58.315 [2024-11-20 08:34:45.744131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:58.315 [2024-11-20 08:34:45.744141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.315 [2024-11-20 08:34:45.744153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.315 [2024-11-20 08:34:45.744164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.315 [2024-11-20 08:34:45.744174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.315 [2024-11-20 08:34:45.744183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.315 [2024-11-20 08:34:45.744194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.315 [2024-11-20 08:34:45.744212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.315 [2024-11-20 08:34:45.744225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:21:58.315 [2024-11-20 08:34:45.744236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.766241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.315 [2024-11-20 08:34:45.766652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.315 [2024-11-20 08:34:45.766823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.012 ms 00:21:58.315 [2024-11-20 08:34:45.766864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.767616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.315 [2024-11-20 08:34:45.767725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.315 [2024-11-20 08:34:45.767826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:21:58.315 [2024-11-20 08:34:45.767863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.827359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.315 [2024-11-20 08:34:45.827667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.315 [2024-11-20 08:34:45.827784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.315 [2024-11-20 08:34:45.827825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.828062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.315 [2024-11-20 08:34:45.828214] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.315 [2024-11-20 08:34:45.828269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.315 [2024-11-20 08:34:45.828299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.828393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.315 [2024-11-20 08:34:45.828429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.315 [2024-11-20 08:34:45.828459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.315 [2024-11-20 08:34:45.828491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.315 [2024-11-20 08:34:45.828644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.315 [2024-11-20 08:34:45.828693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.315 [2024-11-20 08:34:45.828725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.315 [2024-11-20 08:34:45.828754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.574 [2024-11-20 08:34:45.966747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.574 [2024-11-20 08:34:45.967089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.574 [2024-11-20 08:34:45.967230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.574 [2024-11-20 08:34:45.967269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.574 [2024-11-20 08:34:46.079320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.574 [2024-11-20 08:34:46.079637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.574 [2024-11-20 08:34:46.079755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.574 [2024-11-20 08:34:46.079792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.079958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.080083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.575 [2024-11-20 08:34:46.080124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.080155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.080318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.080355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.575 [2024-11-20 08:34:46.080440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.080475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.080656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.080819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.575 [2024-11-20 08:34:46.080904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.080920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.080983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.081011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:58.575 [2024-11-20 08:34:46.081022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.081039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.081091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.081104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.575 [2024-11-20 08:34:46.081115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.081127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.081183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.575 [2024-11-20 08:34:46.081196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.575 [2024-11-20 08:34:46.081211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.575 [2024-11-20 08:34:46.081222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.575 [2024-11-20 08:34:46.081395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 579.343 ms, result 0 00:21:59.952 00:21:59.952 00:21:59.952 08:34:47 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:00.210 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:00.210 08:34:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:00.469 08:34:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75765 00:22:00.469 08:34:47 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' -z 75765 ']' 00:22:00.469 08:34:47 ftl.ftl_trim -- common/autotest_common.sh@961 -- # kill -0 75765 00:22:00.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (75765) - No such process 00:22:00.469 08:34:47 ftl.ftl_trim -- common/autotest_common.sh@984 -- # echo 'Process with pid 75765 is not found' 00:22:00.469 Process with pid 75765 is not found 00:22:00.469 ************************************ 00:22:00.469 END TEST ftl_trim 00:22:00.469 ************************************ 00:22:00.469 00:22:00.469 real 1m12.735s 00:22:00.469 user 1m39.119s 00:22:00.469 sys 0m6.762s 00:22:00.469 08:34:47 ftl.ftl_trim -- common/autotest_common.sh@1133 -- # xtrace_disable 00:22:00.469 08:34:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:00.469 08:34:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:00.469 08:34:47 ftl -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:22:00.469 08:34:47 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:22:00.469 08:34:47 ftl -- common/autotest_common.sh@10 
-- # set +x 00:22:00.469 ************************************ 00:22:00.469 START TEST ftl_restore 00:22:00.469 ************************************ 00:22:00.469 08:34:47 ftl.ftl_restore -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:00.469 * Looking for test storage... 00:22:00.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.469 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:22:00.469 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1638 -- # lcov --version 00:22:00.469 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.729 08:34:48 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:22:00.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.729 --rc genhtml_branch_coverage=1 00:22:00.729 --rc genhtml_function_coverage=1 00:22:00.729 --rc genhtml_legend=1 00:22:00.729 --rc geninfo_all_blocks=1 00:22:00.729 --rc geninfo_unexecuted_blocks=1 00:22:00.729 00:22:00.729 ' 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:22:00.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.729 --rc genhtml_branch_coverage=1 00:22:00.729 --rc genhtml_function_coverage=1 00:22:00.729 --rc genhtml_legend=1 00:22:00.729 --rc geninfo_all_blocks=1 00:22:00.729 --rc geninfo_unexecuted_blocks=1 00:22:00.729 00:22:00.729 ' 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:22:00.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.729 --rc genhtml_branch_coverage=1 00:22:00.729 --rc genhtml_function_coverage=1 00:22:00.729 --rc genhtml_legend=1 00:22:00.729 --rc geninfo_all_blocks=1 00:22:00.729 --rc geninfo_unexecuted_blocks=1 00:22:00.729 00:22:00.729 ' 00:22:00.729 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:22:00.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.729 --rc genhtml_branch_coverage=1 00:22:00.729 --rc genhtml_function_coverage=1 00:22:00.729 --rc genhtml_legend=1 00:22:00.729 --rc geninfo_all_blocks=1 00:22:00.729 --rc geninfo_unexecuted_blocks=1 00:22:00.729 00:22:00.729 ' 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
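The xtrace above walks scripts/common.sh's dotted-version comparison: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on IFS=.-: and compares them component by component to decide that the installed lcov (1.15) predates 2, so the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings are exported. A minimal standalone sketch of that logic, assuming a simplified helper (illustrative only; the real scripts/common.sh also validates each component via its decimal helper and supports more operators):

cmp_versions() {
    # cmp_versions VER1 OP VER2, e.g. cmp_versions 1.15 '<' 2
    local IFS=.-:                  # split components on '.', '-' and ':' as the trace shows
    local op=$2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a == b )) && continue                # tied component: move to the next one
        case $op in                             # first differing component decides
            '<') (( a < b )); return ;;
            '>') (( a > b )); return ;;
        esac
    done
    [[ $op == '==' ]]                           # every component tied
}

# 1 < 2 on the first component, so this succeeds and the legacy options are kept:
cmp_versions 1.15 '<' 2 && echo 'lcov older than 2: use --rc lcov_*_coverage'

In the trace above the comparison returns 0, so lcov_rc_opt is populated with the pre-2.0 option spellings that then land in LCOV_OPTS and LCOV.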
00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:00.729 08:34:48 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.ikKVD0zUw4 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:00.730 
08:34:48 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76051 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76051 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@838 -- # '[' -z 76051 ']' 00:22:00.730 08:34:48 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@843 -- # local max_retries=100 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@847 -- # xtrace_disable 00:22:00.730 08:34:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:00.730 [2024-11-20 08:34:48.247742] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:22:00.730 [2024-11-20 08:34:48.248107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:22:00.989 [2024-11-20 08:34:48.431347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.248 [2024-11-20 08:34:48.575840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.191 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:22:02.191 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@871 -- # return 0 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:02.191 08:34:49 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:02.450 08:34:49 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:02.450 08:34:49 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:02.450 08:34:49 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:02.450 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@1370 -- # local bdev_name=nvme0n1 00:22:02.450 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@1371 -- # local bdev_info 00:22:02.450 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@1372 -- # local bs 00:22:02.450 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@1373 -- # local nb 00:22:02.450 08:34:49 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:22:02.710 { 00:22:02.710 "name": "nvme0n1", 00:22:02.710 "aliases": [ 00:22:02.710 "d59fcee2-a08a-4c72-98ce-8b4e89c56272" 00:22:02.710 ], 00:22:02.710 "product_name": "NVMe disk", 00:22:02.710 "block_size": 4096, 00:22:02.710 "num_blocks": 1310720, 00:22:02.710 "uuid": 
"d59fcee2-a08a-4c72-98ce-8b4e89c56272", 00:22:02.710 "numa_id": -1, 00:22:02.710 "assigned_rate_limits": { 00:22:02.710 "rw_ios_per_sec": 0, 00:22:02.710 "rw_mbytes_per_sec": 0, 00:22:02.710 "r_mbytes_per_sec": 0, 00:22:02.710 "w_mbytes_per_sec": 0 00:22:02.710 }, 00:22:02.710 "claimed": true, 00:22:02.710 "claim_type": "read_many_write_one", 00:22:02.710 "zoned": false, 00:22:02.710 "supported_io_types": { 00:22:02.710 "read": true, 00:22:02.710 "write": true, 00:22:02.710 "unmap": true, 00:22:02.710 "flush": true, 00:22:02.710 "reset": true, 00:22:02.710 "nvme_admin": true, 00:22:02.710 "nvme_io": true, 00:22:02.710 "nvme_io_md": false, 00:22:02.710 "write_zeroes": true, 00:22:02.710 "zcopy": false, 00:22:02.710 "get_zone_info": false, 00:22:02.710 "zone_management": false, 00:22:02.710 "zone_append": false, 00:22:02.710 "compare": true, 00:22:02.710 "compare_and_write": false, 00:22:02.710 "abort": true, 00:22:02.710 "seek_hole": false, 00:22:02.710 "seek_data": false, 00:22:02.710 "copy": true, 00:22:02.710 "nvme_iov_md": false 00:22:02.710 }, 00:22:02.710 "driver_specific": { 00:22:02.710 "nvme": [ 00:22:02.710 { 00:22:02.710 "pci_address": "0000:00:11.0", 00:22:02.710 "trid": { 00:22:02.710 "trtype": "PCIe", 00:22:02.710 "traddr": "0000:00:11.0" 00:22:02.710 }, 00:22:02.710 "ctrlr_data": { 00:22:02.710 "cntlid": 0, 00:22:02.710 "vendor_id": "0x1b36", 00:22:02.710 "model_number": "QEMU NVMe Ctrl", 00:22:02.710 "serial_number": "12341", 00:22:02.710 "firmware_revision": "8.0.0", 00:22:02.710 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:02.710 "oacs": { 00:22:02.710 "security": 0, 00:22:02.710 "format": 1, 00:22:02.710 "firmware": 0, 00:22:02.710 "ns_manage": 1 00:22:02.710 }, 00:22:02.710 "multi_ctrlr": false, 00:22:02.710 "ana_reporting": false 00:22:02.710 }, 00:22:02.710 "vs": { 00:22:02.710 "nvme_version": "1.4" 00:22:02.710 }, 00:22:02.710 "ns_data": { 00:22:02.710 "id": 1, 00:22:02.710 "can_share": false 00:22:02.710 } 00:22:02.710 } 00:22:02.710 ], 00:22:02.710 "mp_policy": "active_passive" 00:22:02.710 } 00:22:02.710 } 00:22:02.710 ]' 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # bs=4096 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # nb=1310720 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:22:02.710 08:34:50 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # echo 5120 00:22:02.710 08:34:50 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:02.710 08:34:50 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:02.710 08:34:50 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:02.969 08:34:50 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:02.969 08:34:50 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:02.969 08:34:50 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a34b27d6-bbb0-4c5b-bd15-2c519592befd 00:22:02.969 08:34:50 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:02.969 08:34:50 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a34b27d6-bbb0-4c5b-bd15-2c519592befd 00:22:03.227 08:34:50 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:03.485 08:34:50 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=66275fd3-28a7-43fa-b883-443eba234eed 00:22:03.485 08:34:50 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 66275fd3-28a7-43fa-b883-443eba234eed 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=21505400-21c5-4292-92c2-9000c3102639 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 21505400-21c5-4292-92c2-9000c3102639 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=21505400-21c5-4292-92c2-9000c3102639 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:03.744 08:34:51 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 21505400-21c5-4292-92c2-9000c3102639 00:22:03.744 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1370 -- # local bdev_name=21505400-21c5-4292-92c2-9000c3102639 00:22:03.744 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1371 -- # local bdev_info 00:22:03.744 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1372 -- # local bs 00:22:03.744 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1373 -- # local nb 00:22:03.744 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21505400-21c5-4292-92c2-9000c3102639 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:22:04.003 { 00:22:04.003 "name": "21505400-21c5-4292-92c2-9000c3102639", 00:22:04.003 "aliases": [ 00:22:04.003 "lvs/nvme0n1p0" 00:22:04.003 ], 00:22:04.003 "product_name": "Logical Volume", 00:22:04.003 "block_size": 4096, 00:22:04.003 "num_blocks": 26476544, 00:22:04.003 "uuid": "21505400-21c5-4292-92c2-9000c3102639", 00:22:04.003 "assigned_rate_limits": { 00:22:04.003 "rw_ios_per_sec": 0, 00:22:04.003 "rw_mbytes_per_sec": 0, 00:22:04.003 "r_mbytes_per_sec": 0, 00:22:04.003 "w_mbytes_per_sec": 0 00:22:04.003 }, 00:22:04.003 "claimed": false, 00:22:04.003 "zoned": false, 00:22:04.003 "supported_io_types": { 00:22:04.003 "read": true, 00:22:04.003 "write": true, 00:22:04.003 "unmap": true, 00:22:04.003 "flush": false, 00:22:04.003 "reset": true, 00:22:04.003 "nvme_admin": false, 00:22:04.003 "nvme_io": false, 00:22:04.003 "nvme_io_md": false, 00:22:04.003 "write_zeroes": true, 00:22:04.003 "zcopy": false, 00:22:04.003 "get_zone_info": false, 00:22:04.003 "zone_management": false, 00:22:04.003 "zone_append": false, 00:22:04.003 "compare": false, 00:22:04.003 "compare_and_write": false, 00:22:04.003 "abort": false, 00:22:04.003 "seek_hole": true, 00:22:04.003 "seek_data": true, 00:22:04.003 "copy": false, 00:22:04.003 "nvme_iov_md": false 00:22:04.003 }, 00:22:04.003 "driver_specific": { 00:22:04.003 "lvol": { 00:22:04.003 "lvol_store_uuid": "66275fd3-28a7-43fa-b883-443eba234eed", 00:22:04.003 "base_bdev": "nvme0n1", 00:22:04.003 "thin_provision": true, 00:22:04.003 "num_allocated_clusters": 0, 00:22:04.003 "snapshot": false, 00:22:04.003 "clone": false, 00:22:04.003 "esnap_clone": false 00:22:04.003 } 00:22:04.003 } 00:22:04.003 } 00:22:04.003 ]' 00:22:04.003 08:34:51 ftl.ftl_restore -- 
common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # bs=4096 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # nb=26476544 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:22:04.003 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # echo 103424 00:22:04.003 08:34:51 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:04.003 08:34:51 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:04.003 08:34:51 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:04.262 08:34:51 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:04.262 08:34:51 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:04.262 08:34:51 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 21505400-21c5-4292-92c2-9000c3102639 00:22:04.262 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1370 -- # local bdev_name=21505400-21c5-4292-92c2-9000c3102639 00:22:04.262 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1371 -- # local bdev_info 00:22:04.262 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1372 -- # local bs 00:22:04.262 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1373 -- # local nb 00:22:04.262 08:34:51 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21505400-21c5-4292-92c2-9000c3102639 00:22:04.521 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:22:04.521 { 00:22:04.521 "name": "21505400-21c5-4292-92c2-9000c3102639", 00:22:04.521 "aliases": [ 00:22:04.521 "lvs/nvme0n1p0" 00:22:04.521 ], 00:22:04.521 "product_name": "Logical Volume", 00:22:04.521 "block_size": 4096, 00:22:04.521 "num_blocks": 26476544, 00:22:04.521 "uuid": "21505400-21c5-4292-92c2-9000c3102639", 00:22:04.521 "assigned_rate_limits": { 00:22:04.521 "rw_ios_per_sec": 0, 00:22:04.521 "rw_mbytes_per_sec": 0, 00:22:04.521 "r_mbytes_per_sec": 0, 00:22:04.521 "w_mbytes_per_sec": 0 00:22:04.521 }, 00:22:04.521 "claimed": false, 00:22:04.521 "zoned": false, 00:22:04.521 "supported_io_types": { 00:22:04.521 "read": true, 00:22:04.521 "write": true, 00:22:04.521 "unmap": true, 00:22:04.521 "flush": false, 00:22:04.521 "reset": true, 00:22:04.521 "nvme_admin": false, 00:22:04.521 "nvme_io": false, 00:22:04.521 "nvme_io_md": false, 00:22:04.521 "write_zeroes": true, 00:22:04.521 "zcopy": false, 00:22:04.521 "get_zone_info": false, 00:22:04.521 "zone_management": false, 00:22:04.521 "zone_append": false, 00:22:04.521 "compare": false, 00:22:04.521 "compare_and_write": false, 00:22:04.521 "abort": false, 00:22:04.521 "seek_hole": true, 00:22:04.521 "seek_data": true, 00:22:04.521 "copy": false, 00:22:04.521 "nvme_iov_md": false 00:22:04.521 }, 00:22:04.521 "driver_specific": { 00:22:04.521 "lvol": { 00:22:04.521 "lvol_store_uuid": "66275fd3-28a7-43fa-b883-443eba234eed", 00:22:04.521 "base_bdev": "nvme0n1", 00:22:04.521 "thin_provision": true, 00:22:04.521 "num_allocated_clusters": 0, 00:22:04.521 "snapshot": false, 00:22:04.521 "clone": false, 00:22:04.521 "esnap_clone": false 00:22:04.521 } 00:22:04.521 } 00:22:04.521 } 00:22:04.521 ]' 00:22:04.521 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 
00:22:04.521 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # bs=4096 00:22:04.521 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # nb=26476544 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # echo 103424 00:22:04.780 08:34:52 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:04.780 08:34:52 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:04.780 08:34:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:04.780 08:34:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 21505400-21c5-4292-92c2-9000c3102639 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1370 -- # local bdev_name=21505400-21c5-4292-92c2-9000c3102639 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1371 -- # local bdev_info 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1372 -- # local bs 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1373 -- # local nb 00:22:04.780 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21505400-21c5-4292-92c2-9000c3102639 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:22:05.041 { 00:22:05.041 "name": "21505400-21c5-4292-92c2-9000c3102639", 00:22:05.041 "aliases": [ 00:22:05.041 "lvs/nvme0n1p0" 00:22:05.041 ], 00:22:05.041 "product_name": "Logical Volume", 00:22:05.041 "block_size": 4096, 00:22:05.041 "num_blocks": 26476544, 00:22:05.041 "uuid": "21505400-21c5-4292-92c2-9000c3102639", 00:22:05.041 "assigned_rate_limits": { 00:22:05.041 "rw_ios_per_sec": 0, 00:22:05.041 "rw_mbytes_per_sec": 0, 00:22:05.041 "r_mbytes_per_sec": 0, 00:22:05.041 "w_mbytes_per_sec": 0 00:22:05.041 }, 00:22:05.041 "claimed": false, 00:22:05.041 "zoned": false, 00:22:05.041 "supported_io_types": { 00:22:05.041 "read": true, 00:22:05.041 "write": true, 00:22:05.041 "unmap": true, 00:22:05.041 "flush": false, 00:22:05.041 "reset": true, 00:22:05.041 "nvme_admin": false, 00:22:05.041 "nvme_io": false, 00:22:05.041 "nvme_io_md": false, 00:22:05.041 "write_zeroes": true, 00:22:05.041 "zcopy": false, 00:22:05.041 "get_zone_info": false, 00:22:05.041 "zone_management": false, 00:22:05.041 "zone_append": false, 00:22:05.041 "compare": false, 00:22:05.041 "compare_and_write": false, 00:22:05.041 "abort": false, 00:22:05.041 "seek_hole": true, 00:22:05.041 "seek_data": true, 00:22:05.041 "copy": false, 00:22:05.041 "nvme_iov_md": false 00:22:05.041 }, 00:22:05.041 "driver_specific": { 00:22:05.041 "lvol": { 00:22:05.041 "lvol_store_uuid": "66275fd3-28a7-43fa-b883-443eba234eed", 00:22:05.041 "base_bdev": "nvme0n1", 00:22:05.041 "thin_provision": true, 00:22:05.041 "num_allocated_clusters": 0, 00:22:05.041 "snapshot": false, 00:22:05.041 "clone": false, 00:22:05.041 "esnap_clone": false 00:22:05.041 } 00:22:05.041 } 00:22:05.041 } 00:22:05.041 ]' 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # bs=4096 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:22:05.041 08:34:52 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # nb=26476544 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:22:05.041 08:34:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # echo 103424 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 21505400-21c5-4292-92c2-9000c3102639 --l2p_dram_limit 10' 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:05.041 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:05.041 08:34:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 21505400-21c5-4292-92c2-9000c3102639 --l2p_dram_limit 10 -c nvc0n1p0 00:22:05.302 [2024-11-20 08:34:52.792553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.792645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:05.302 [2024-11-20 08:34:52.792674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:05.302 [2024-11-20 08:34:52.792689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.792888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.792903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.302 [2024-11-20 08:34:52.792918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:22:05.302 [2024-11-20 08:34:52.792929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.792965] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:05.302 [2024-11-20 08:34:52.794336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:05.302 [2024-11-20 08:34:52.794380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.794392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.302 [2024-11-20 08:34:52.794407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:22:05.302 [2024-11-20 08:34:52.794417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.794526] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 232784ec-97dd-4428-aed9-3dfdc45d7381 00:22:05.302 [2024-11-20 08:34:52.797054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.797095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:05.302 [2024-11-20 08:34:52.797109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:05.302 [2024-11-20 08:34:52.797127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.810826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 
08:34:52.810874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.302 [2024-11-20 08:34:52.810895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.641 ms 00:22:05.302 [2024-11-20 08:34:52.810909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.811056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.811074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.302 [2024-11-20 08:34:52.811087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:05.302 [2024-11-20 08:34:52.811106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.811231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.811253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:05.302 [2024-11-20 08:34:52.811266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:05.302 [2024-11-20 08:34:52.811285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.811322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:05.302 [2024-11-20 08:34:52.817243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.817282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.302 [2024-11-20 08:34:52.817299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.941 ms 00:22:05.302 [2024-11-20 08:34:52.817310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.817361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.817375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:05.302 [2024-11-20 08:34:52.817389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:05.302 [2024-11-20 08:34:52.817401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.817445] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:05.302 [2024-11-20 08:34:52.817614] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:05.302 [2024-11-20 08:34:52.817640] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:05.302 [2024-11-20 08:34:52.817656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:05.302 [2024-11-20 08:34:52.817673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:05.302 [2024-11-20 08:34:52.817685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:05.302 [2024-11-20 08:34:52.817700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:05.302 [2024-11-20 08:34:52.817711] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:05.302 [2024-11-20 08:34:52.817732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:05.302 [2024-11-20 08:34:52.817743] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:05.302 [2024-11-20 08:34:52.817758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.817770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:05.302 [2024-11-20 08:34:52.817783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:22:05.302 [2024-11-20 08:34:52.817807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.817891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.302 [2024-11-20 08:34:52.817903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:05.302 [2024-11-20 08:34:52.817918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:05.302 [2024-11-20 08:34:52.817928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.302 [2024-11-20 08:34:52.818053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:05.302 [2024-11-20 08:34:52.818068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:05.302 [2024-11-20 08:34:52.818084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.302 [2024-11-20 08:34:52.818096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.302 [2024-11-20 08:34:52.818110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:05.302 [2024-11-20 08:34:52.818120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:05.302 [2024-11-20 08:34:52.818141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:05.302 [2024-11-20 08:34:52.818151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:05.302 [2024-11-20 08:34:52.818164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:05.302 [2024-11-20 08:34:52.818174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.302 [2024-11-20 08:34:52.818187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:05.302 [2024-11-20 08:34:52.818197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:05.302 [2024-11-20 08:34:52.818210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.302 [2024-11-20 08:34:52.818225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:05.303 [2024-11-20 08:34:52.818238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:05.303 [2024-11-20 08:34:52.818247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:05.303 [2024-11-20 08:34:52.818283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:05.303 [2024-11-20 08:34:52.818318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:05.303 
[2024-11-20 08:34:52.818350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:05.303 [2024-11-20 08:34:52.818385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:05.303 [2024-11-20 08:34:52.818418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:05.303 [2024-11-20 08:34:52.818456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.303 [2024-11-20 08:34:52.818477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:05.303 [2024-11-20 08:34:52.818486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:05.303 [2024-11-20 08:34:52.818497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.303 [2024-11-20 08:34:52.818507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:05.303 [2024-11-20 08:34:52.818518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:05.303 [2024-11-20 08:34:52.818528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:05.303 [2024-11-20 08:34:52.818549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:05.303 [2024-11-20 08:34:52.818560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818569] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:05.303 [2024-11-20 08:34:52.818584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:05.303 [2024-11-20 08:34:52.818597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.303 [2024-11-20 08:34:52.818622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:05.303 [2024-11-20 08:34:52.818638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:05.303 [2024-11-20 08:34:52.818648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:05.303 [2024-11-20 08:34:52.818660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:05.303 [2024-11-20 08:34:52.818670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:05.303 [2024-11-20 08:34:52.818682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:05.303 [2024-11-20 08:34:52.818697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:05.303 [2024-11-20 
08:34:52.818714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:05.303 [2024-11-20 08:34:52.818744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:05.303 [2024-11-20 08:34:52.818756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:05.303 [2024-11-20 08:34:52.818770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:05.303 [2024-11-20 08:34:52.818781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:05.303 [2024-11-20 08:34:52.818794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:05.303 [2024-11-20 08:34:52.818805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:05.303 [2024-11-20 08:34:52.818819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:05.303 [2024-11-20 08:34:52.818829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:05.303 [2024-11-20 08:34:52.818846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:05.303 [2024-11-20 08:34:52.818905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:05.303 [2024-11-20 08:34:52.818919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:05.303 [2024-11-20 08:34:52.818945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:05.303 [2024-11-20 08:34:52.818955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:05.303 [2024-11-20 08:34:52.818969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:05.303 [2024-11-20 08:34:52.818980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.303 [2024-11-20 08:34:52.819007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:05.303 [2024-11-20 08:34:52.819022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:22:05.303 [2024-11-20 08:34:52.819036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.303 [2024-11-20 08:34:52.819085] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:05.303 [2024-11-20 08:34:52.819106] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:09.515 [2024-11-20 08:34:56.875037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.875123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:09.515 [2024-11-20 08:34:56.875142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4062.533 ms 00:22:09.515 [2024-11-20 08:34:56.875157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:56.922261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.922567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.515 [2024-11-20 08:34:56.922596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.828 ms 00:22:09.515 [2024-11-20 08:34:56.922612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:56.922816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.922833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:09.515 [2024-11-20 08:34:56.922846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:09.515 [2024-11-20 08:34:56.922864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:56.976158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.976227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.515 [2024-11-20 08:34:56.976243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.302 ms 00:22:09.515 [2024-11-20 08:34:56.976258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:56.976319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.976341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.515 [2024-11-20 08:34:56.976353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.515 [2024-11-20 08:34:56.976367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:56.977218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.977241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.515 [2024-11-20 08:34:56.977253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:22:09.515 [2024-11-20 08:34:56.977267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 
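The two layout dumps above describe the same regions twice: ftl_layout.c reports offsets and sizes in MiB, while the v5 superblock dump reports them as raw block offsets and counts in hex. The figures agree if one FTL block is 4 KiB, which the L2P numbers also support (20971520 entries x 4 B per entry = 80.00 MiB, exactly the l2p region size). A quick conversion sketch under that 4 KiB assumption; blk_to_mib is a hypothetical helper, not part of the test suite:

  blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  blk_to_mib 0x5000   # region type 0x2 (l2p)       -> 80.00 MiB
  blk_to_mib 0x800    # types 0xa-0xd (p2l0..p2l3)  -> 8.00 MiB each
  blk_to_mib 0x20     # region type 0x0 (sb)        -> .12 MiB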
[2024-11-20 08:34:56.977387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:56.977403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.515 [2024-11-20 08:34:56.977418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:09.515 [2024-11-20 08:34:56.977435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:57.002733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:57.002803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.515 [2024-11-20 08:34:57.002837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.315 ms 00:22:09.515 [2024-11-20 08:34:57.002852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.515 [2024-11-20 08:34:57.017359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:09.515 [2024-11-20 08:34:57.022373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.515 [2024-11-20 08:34:57.022407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:09.515 [2024-11-20 08:34:57.022425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.402 ms 00:22:09.515 [2024-11-20 08:34:57.022437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.136737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.136839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:09.775 [2024-11-20 08:34:57.136864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.429 ms 00:22:09.775 [2024-11-20 08:34:57.136877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.137139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.137162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:09.775 [2024-11-20 08:34:57.137182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:22:09.775 [2024-11-20 08:34:57.137193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.174788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.175065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:09.775 [2024-11-20 08:34:57.175096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.566 ms 00:22:09.775 [2024-11-20 08:34:57.175109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.210906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.211112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:09.775 [2024-11-20 08:34:57.211144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.799 ms 00:22:09.775 [2024-11-20 08:34:57.211157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.211963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.211998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:09.775 
[2024-11-20 08:34:57.212016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:22:09.775 [2024-11-20 08:34:57.212028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.775 [2024-11-20 08:34:57.320453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.775 [2024-11-20 08:34:57.320521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:09.775 [2024-11-20 08:34:57.320549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.525 ms 00:22:09.775 [2024-11-20 08:34:57.320562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.360524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.035 [2024-11-20 08:34:57.360802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:10.035 [2024-11-20 08:34:57.360834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.931 ms 00:22:10.035 [2024-11-20 08:34:57.360847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.398618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.035 [2024-11-20 08:34:57.398667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:10.035 [2024-11-20 08:34:57.398687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.778 ms 00:22:10.035 [2024-11-20 08:34:57.398698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.436481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.035 [2024-11-20 08:34:57.436528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:10.035 [2024-11-20 08:34:57.436547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.792 ms 00:22:10.035 [2024-11-20 08:34:57.436559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.436613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.035 [2024-11-20 08:34:57.436627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:10.035 [2024-11-20 08:34:57.436646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:10.035 [2024-11-20 08:34:57.436657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.436811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.035 [2024-11-20 08:34:57.436828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:10.035 [2024-11-20 08:34:57.436847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:10.035 [2024-11-20 08:34:57.436857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.035 [2024-11-20 08:34:57.438636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4652.787 ms, result 0 00:22:10.035 { 00:22:10.035 "name": "ftl0", 00:22:10.035 "uuid": "232784ec-97dd-4428-aed9-3dfdc45d7381" 00:22:10.035 } 00:22:10.035 08:34:57 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:10.035 08:34:57 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:10.295 08:34:57 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
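FTL startup took 4652.787 ms in total, of which the NV cache scrub alone accounted for 4062.533 ms (roughly 812 ms per chunk for the 5 chunks). With the device up, restore.sh snapshots the bdev subsystem configuration by wrapping the RPC output in a subsystems array. The three commands below are verbatim from the trace above; redirecting the result to the ftl.json path is an inference from the --json argument of the later spdk_dd invocation:

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json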
00:22:10.295 08:34:57 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:10.556 [2024-11-20 08:34:57.872618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.872878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.556 [2024-11-20 08:34:57.872908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:10.556 [2024-11-20 08:34:57.872934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.872973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.556 [2024-11-20 08:34:57.877819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.877857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.556 [2024-11-20 08:34:57.877874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:22:10.556 [2024-11-20 08:34:57.877886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.878195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.878213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.556 [2024-11-20 08:34:57.878233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:22:10.556 [2024-11-20 08:34:57.878245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.880789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.880814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.556 [2024-11-20 08:34:57.880829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:22:10.556 [2024-11-20 08:34:57.880841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.886055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.886223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.556 [2024-11-20 08:34:57.886255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.197 ms 00:22:10.556 [2024-11-20 08:34:57.886267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.923707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.923751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.556 [2024-11-20 08:34:57.923769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:22:10.556 [2024-11-20 08:34:57.923780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.946414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.946458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.556 [2024-11-20 08:34:57.946477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.614 ms 00:22:10.556 [2024-11-20 08:34:57.946488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
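Each management step in this trace is the same four-record group emitted by mngt/ftl_mngt.c:427-431: Action (or Rollback), name, duration, status. That makes per-step timing easy to mine out of a saved console log; a hypothetical one-liner, assuming the output was captured to a file named console.log:

  grep -o 'duration: [0-9.]* ms' console.log \
    | awk '{ sum += $2 } END { printf "%.3f ms across %d steps\n", sum, NR }'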
00:22:10.556 [2024-11-20 08:34:57.946680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.946694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.556 [2024-11-20 08:34:57.946710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:10.556 [2024-11-20 08:34:57.946721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.556 [2024-11-20 08:34:57.983513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.556 [2024-11-20 08:34:57.983553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:10.556 [2024-11-20 08:34:57.983571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.824 ms 00:22:10.557 [2024-11-20 08:34:57.983583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.557 [2024-11-20 08:34:58.019925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.557 [2024-11-20 08:34:58.019969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:10.557 [2024-11-20 08:34:58.019998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.350 ms 00:22:10.557 [2024-11-20 08:34:58.020010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.557 [2024-11-20 08:34:58.055772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.557 [2024-11-20 08:34:58.055816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.557 [2024-11-20 08:34:58.055833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.766 ms 00:22:10.557 [2024-11-20 08:34:58.055844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.557 [2024-11-20 08:34:58.092727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.557 [2024-11-20 08:34:58.092767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.557 [2024-11-20 08:34:58.092785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.808 ms 00:22:10.557 [2024-11-20 08:34:58.092796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.557 [2024-11-20 08:34:58.092845] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.557 [2024-11-20 08:34:58.092866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.092979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093019]
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 
[2024-11-20 08:34:58.093525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:10.557 [2024-11-20 08:34:58.093862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.557 [2024-11-20 08:34:58.093976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:10.558 [2024-11-20 08:34:58.094422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.558 [2024-11-20 08:34:58.094440] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 232784ec-97dd-4428-aed9-3dfdc45d7381 00:22:10.558 [2024-11-20 08:34:58.094451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:10.558 [2024-11-20 08:34:58.094469] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:10.558 [2024-11-20 08:34:58.094479] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:10.558 [2024-11-20 08:34:58.094498] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:10.558 [2024-11-20 08:34:58.094508] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.558 [2024-11-20 08:34:58.094522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.558 [2024-11-20 08:34:58.094532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.558 [2024-11-20 08:34:58.094545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.558 [2024-11-20 08:34:58.094554] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:10.558 [2024-11-20 08:34:58.094567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.558 [2024-11-20 08:34:58.094579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.558 [2024-11-20 08:34:58.094593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:22:10.558 [2024-11-20 08:34:58.094604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.115741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.818 [2024-11-20 08:34:58.115943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.818 [2024-11-20 08:34:58.115971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.106 ms 00:22:10.818 [2024-11-20 08:34:58.115983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.116640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.818 [2024-11-20 08:34:58.116657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.818 [2024-11-20 08:34:58.116673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:22:10.818 [2024-11-20 08:34:58.116695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.184920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.818 [2024-11-20 08:34:58.184974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.818 [2024-11-20 08:34:58.185012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.818 [2024-11-20 08:34:58.185025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.185119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.818 [2024-11-20 08:34:58.185133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.818 [2024-11-20 08:34:58.185147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.818 [2024-11-20 08:34:58.185162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.185312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.818 [2024-11-20 08:34:58.185328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.818 [2024-11-20 08:34:58.185343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.818 [2024-11-20 08:34:58.185354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.185384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.818 [2024-11-20 08:34:58.185395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.818 [2024-11-20 08:34:58.185409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.818 [2024-11-20 08:34:58.185419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.818 [2024-11-20 08:34:58.320506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.818 [2024-11-20 08:34:58.320588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.818 [2024-11-20 08:34:58.320610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.818 [2024-11-20 08:34:58.320622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
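The statistics block above reports WAF: inf. That is consistent with the write amplification factor being computed as total media writes over user writes: the device has performed 960 writes so far, all of them FTL metadata, and zero user writes, so the denominator is zero. A sketch of that reading (the formula is an assumption inferred from the dump, not SPDK code):

  awk -v total=960 -v user=0 \
    'BEGIN { if (user > 0) printf "WAF: %.2f\n", total / user; else print "WAF: inf" }'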
00:22:11.078 [2024-11-20 08:34:58.427212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.078 [2024-11-20 08:34:58.427294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.078 [2024-11-20 08:34:58.427314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.078 [2024-11-20 08:34:58.427330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.078 [2024-11-20 08:34:58.427496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.078 [2024-11-20 08:34:58.427517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.078 [2024-11-20 08:34:58.427532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.078 [2024-11-20 08:34:58.427544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.078 [2024-11-20 08:34:58.427619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.078 [2024-11-20 08:34:58.427632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.078 [2024-11-20 08:34:58.427647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.078 [2024-11-20 08:34:58.427657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.078 [2024-11-20 08:34:58.427811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.078 [2024-11-20 08:34:58.427825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.078 [2024-11-20 08:34:58.427841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.078 [2024-11-20 08:34:58.427851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.078 [2024-11-20 08:34:58.427897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.078 [2024-11-20 08:34:58.427910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.078 [2024-11-20 08:34:58.427923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.079 [2024-11-20 08:34:58.427934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.079 [2024-11-20 08:34:58.428015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.079 [2024-11-20 08:34:58.428032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.079 [2024-11-20 08:34:58.428047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.079 [2024-11-20 08:34:58.428059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.079 [2024-11-20 08:34:58.428125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.079 [2024-11-20 08:34:58.428137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.079 [2024-11-20 08:34:58.428151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.079 [2024-11-20 08:34:58.428162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.079 [2024-11-20 08:34:58.428338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.567 ms, result 0 00:22:11.079 true 00:22:11.079 08:34:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76051
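The trace that follows steps through killprocess from common/autotest_common.sh: it rejects an empty pid, probes the target with kill -0, resolves the command name via ps --no-headers -o comm= so it never kills a sudo wrapper (here the name is reactor_0, the SPDK app), then kills and reaps the process. A minimal sketch reconstructed from those trace lines, not the verbatim SPDK implementation:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # no pid supplied
    kill -0 "$pid" || return 0             # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # reap; works because it is our own child
  }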
00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' -z 76051 ']' 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@961 -- # kill -0 76051 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@962 -- # uname 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 76051 00:22:11.079 killing process with pid 76051 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'killing process with pid 76051' 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@976 -- # kill 76051 00:22:11.079 08:34:58 ftl.ftl_restore -- common/autotest_common.sh@981 -- # wait 76051 00:22:13.618 08:35:01 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:17.816 262144+0 records in 00:22:17.816 262144+0 records out 00:22:17.816 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.203 s, 255 MB/s 00:22:17.816 08:35:05 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:19.725 08:35:07 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:19.725 [2024-11-20 08:35:07.139257] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:22:19.725 [2024-11-20 08:35:07.139396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76298 ] 00:22:19.984 [2024-11-20 08:35:07.325736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.984 [2024-11-20 08:35:07.477277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.555 [2024-11-20 08:35:07.892756] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.555 [2024-11-20 08:35:07.892845] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.555 [2024-11-20 08:35:08.065295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.065601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:20.555 [2024-11-20 08:35:08.065640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:20.555 [2024-11-20 08:35:08.065653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.065728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.065742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:20.555 [2024-11-20 08:35:08.065758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:20.555 [2024-11-20 08:35:08.065769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.065793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
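The dd step above is easy to sanity-check: bs=4K count=256K means 262144 records of 4096 bytes, exactly 1 GiB, and dd's reported rate is that byte count over the elapsed time in decimal megabytes:

  awk 'BEGIN {
    bytes = 262144 * 4096                        # 1073741824 B, the 1 GiB testfile
    printf "%.0f MB/s\n", bytes / 4.203 / 1e6    # ~255 MB/s, matching dd
  }'

spdk_dd then loads the saved ftl.json to recreate ftl0 inside its own process, which is why a fresh FTL startup trace begins here before the testfile is streamed into the device.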
00:22:20.555 [2024-11-20 08:35:08.066829] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:20.555 [2024-11-20 08:35:08.066860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.066872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:20.555 [2024-11-20 08:35:08.066885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:22:20.555 [2024-11-20 08:35:08.066895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.069432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:20.555 [2024-11-20 08:35:08.090776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.090820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:20.555 [2024-11-20 08:35:08.090837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.379 ms 00:22:20.555 [2024-11-20 08:35:08.090848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.090925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.090939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:20.555 [2024-11-20 08:35:08.090951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:20.555 [2024-11-20 08:35:08.090961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.103060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.103225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.555 [2024-11-20 08:35:08.103248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.874 ms 00:22:20.555 [2024-11-20 08:35:08.103260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.103365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.103379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.555 [2024-11-20 08:35:08.103391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:20.555 [2024-11-20 08:35:08.103402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.103469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.103481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:20.555 [2024-11-20 08:35:08.103493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:20.555 [2024-11-20 08:35:08.103504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.103536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:20.555 [2024-11-20 08:35:08.109331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.109367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.555 [2024-11-20 08:35:08.109380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.816 ms 00:22:20.555 [2024-11-20 08:35:08.109395]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.109429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.109441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:20.555 [2024-11-20 08:35:08.109452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:20.555 [2024-11-20 08:35:08.109463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.109503] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:20.555 [2024-11-20 08:35:08.109530] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:20.555 [2024-11-20 08:35:08.109569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:20.555 [2024-11-20 08:35:08.109593] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:20.555 [2024-11-20 08:35:08.109688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:20.555 [2024-11-20 08:35:08.109702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:20.555 [2024-11-20 08:35:08.109716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:20.555 [2024-11-20 08:35:08.109730] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:20.555 [2024-11-20 08:35:08.109744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:20.555 [2024-11-20 08:35:08.109756] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:20.555 [2024-11-20 08:35:08.109766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:20.555 [2024-11-20 08:35:08.109777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:20.555 [2024-11-20 08:35:08.109787] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:20.555 [2024-11-20 08:35:08.109802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.109812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:20.555 [2024-11-20 08:35:08.109823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:22:20.555 [2024-11-20 08:35:08.109833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.109906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.555 [2024-11-20 08:35:08.109917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:20.555 [2024-11-20 08:35:08.109927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:20.555 [2024-11-20 08:35:08.109937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.555 [2024-11-20 08:35:08.110054] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:20.555 [2024-11-20 08:35:08.110075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:20.555 [2024-11-20 08:35:08.110087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:20.555 [2024-11-20 08:35:08.110097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:20.555 [2024-11-20 08:35:08.110118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:20.555 [2024-11-20 08:35:08.110157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.555 [2024-11-20 08:35:08.110178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:20.555 [2024-11-20 08:35:08.110188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:20.555 [2024-11-20 08:35:08.110198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.555 [2024-11-20 08:35:08.110207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:20.555 [2024-11-20 08:35:08.110217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:20.555 [2024-11-20 08:35:08.110237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:20.555 [2024-11-20 08:35:08.110256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:20.555 [2024-11-20 08:35:08.110286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:20.555 [2024-11-20 08:35:08.110315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:20.555 [2024-11-20 08:35:08.110343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:20.555 [2024-11-20 08:35:08.110371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.555 [2024-11-20 08:35:08.110391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:20.555 [2024-11-20 08:35:08.110400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:20.555 [2024-11-20 08:35:08.110410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.555 [2024-11-20 08:35:08.110419] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:20.556 [2024-11-20 08:35:08.110429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:20.556 [2024-11-20 08:35:08.110437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.556 [2024-11-20 08:35:08.110447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:20.556 [2024-11-20 08:35:08.110456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:20.556 [2024-11-20 08:35:08.110465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.556 [2024-11-20 08:35:08.110474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:20.556 [2024-11-20 08:35:08.110485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:20.556 [2024-11-20 08:35:08.110494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.556 [2024-11-20 08:35:08.110502] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:20.556 [2024-11-20 08:35:08.110512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:20.556 [2024-11-20 08:35:08.110522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:20.556 [2024-11-20 08:35:08.110531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.556 [2024-11-20 08:35:08.110541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:20.556 [2024-11-20 08:35:08.110551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:20.556 [2024-11-20 08:35:08.110560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:20.556 [2024-11-20 08:35:08.110570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:20.556 [2024-11-20 08:35:08.110579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:20.556 [2024-11-20 08:35:08.110588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:20.556 [2024-11-20 08:35:08.110599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:20.556 [2024-11-20 08:35:08.110612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:20.556 [2024-11-20 08:35:08.110635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:20.556 [2024-11-20 08:35:08.110645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:20.556 [2024-11-20 08:35:08.110655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:20.556 [2024-11-20 08:35:08.110665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:20.556 [2024-11-20 08:35:08.110675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:20.556 [2024-11-20 08:35:08.110685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:20.556 [2024-11-20 08:35:08.110695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:20.556 [2024-11-20 08:35:08.110705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:20.556 [2024-11-20 08:35:08.110715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:20.556 [2024-11-20 08:35:08.110764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:20.556 [2024-11-20 08:35:08.110780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:20.556 [2024-11-20 08:35:08.110801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:20.556 [2024-11-20 08:35:08.110812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:20.556 [2024-11-20 08:35:08.110822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:20.556 [2024-11-20 08:35:08.110833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.556 [2024-11-20 08:35:08.110844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:20.556 [2024-11-20 08:35:08.110854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:22:20.556 [2024-11-20 08:35:08.110864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.816 [2024-11-20 08:35:08.161728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.816 [2024-11-20 08:35:08.161783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.816 [2024-11-20 08:35:08.161800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.891 ms 00:22:20.816 [2024-11-20 08:35:08.161812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.816 [2024-11-20 08:35:08.161935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.816 [2024-11-20 08:35:08.161949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:20.816 [2024-11-20 08:35:08.161960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:22:20.816 [2024-11-20 08:35:08.161970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.816 [2024-11-20 08:35:08.229611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.816 [2024-11-20 08:35:08.229675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.816 [2024-11-20 08:35:08.229693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.611 ms 00:22:20.816 [2024-11-20 08:35:08.229704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.816 [2024-11-20 08:35:08.229780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.816 [2024-11-20 08:35:08.229792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.816 [2024-11-20 08:35:08.229813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.816 [2024-11-20 08:35:08.229824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.816 [2024-11-20 08:35:08.230651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.230674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.817 [2024-11-20 08:35:08.230686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:22:20.817 [2024-11-20 08:35:08.230696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.230847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.230862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.817 [2024-11-20 08:35:08.230874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:22:20.817 [2024-11-20 08:35:08.230894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.254876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.254928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.817 [2024-11-20 08:35:08.254948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.996 ms 00:22:20.817 [2024-11-20 08:35:08.254960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.275762] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:20.817 [2024-11-20 08:35:08.275813] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:20.817 [2024-11-20 08:35:08.275831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.275844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:20.817 [2024-11-20 08:35:08.275857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.724 ms 00:22:20.817 [2024-11-20 08:35:08.275868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.307173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.307414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:20.817 [2024-11-20 08:35:08.307455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.298 ms 00:22:20.817 [2024-11-20 08:35:08.307467] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.327314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.327380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:20.817 [2024-11-20 08:35:08.327395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.794 ms 00:22:20.817 [2024-11-20 08:35:08.327406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.346303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.346346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:20.817 [2024-11-20 08:35:08.346361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:22:20.817 [2024-11-20 08:35:08.346372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-20 08:35:08.347263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-20 08:35:08.347288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:20.817 [2024-11-20 08:35:08.347302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:22:20.817 [2024-11-20 08:35:08.347313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.446556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.446645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:21.077 [2024-11-20 08:35:08.446664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.372 ms 00:22:21.077 [2024-11-20 08:35:08.446687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.458544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:21.077 [2024-11-20 08:35:08.463348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.463383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.077 [2024-11-20 08:35:08.463400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.605 ms 00:22:21.077 [2024-11-20 08:35:08.463411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.463547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.463562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.077 [2024-11-20 08:35:08.463574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:21.077 [2024-11-20 08:35:08.463585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.463683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.463698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.077 [2024-11-20 08:35:08.463710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:21.077 [2024-11-20 08:35:08.463721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.463750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.463762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:21.077 [2024-11-20 08:35:08.463773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:21.077 [2024-11-20 08:35:08.463784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.463829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.077 [2024-11-20 08:35:08.463842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.463859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.077 [2024-11-20 08:35:08.463871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:21.077 [2024-11-20 08:35:08.463881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.502592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.502644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.077 [2024-11-20 08:35:08.502661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.750 ms 00:22:21.077 [2024-11-20 08:35:08.502673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.502780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.077 [2024-11-20 08:35:08.502796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.077 [2024-11-20 08:35:08.502807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:21.077 [2024-11-20 08:35:08.502819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.077 [2024-11-20 08:35:08.504378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 439.181 ms, result 0 00:22:22.015  [2024-11-20T08:35:10.515Z] Copying: 22/1024 [MB] (22 MBps) [... 40 intermediate progress ticks (44 → 1012 MB, 22-27 MBps, roughly one per second) elided ...] [2024-11-20T08:35:50.042Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-20 08:35:49.942947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.481 [2024-11-20 08:35:49.943031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.481 [2024-11-20 08:35:49.943049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.481 [2024-11-20 08:35:49.943060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-20 08:35:49.943084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.481 [2024-11-20 08:35:49.947109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:49.947145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.482 [2024-11-20 08:35:49.947158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:23:02.482 [2024-11-20 08:35:49.947168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:49.949234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:49.949416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.482 [2024-11-20 08:35:49.949437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:23:02.482 [2024-11-20 08:35:49.949448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:49.967098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:49.967154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.482 [2024-11-20 08:35:49.967169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.653 ms 00:23:02.482 [2024-11-20 08:35:49.967180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:49.972206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:49.972356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.482 [2024-11-20 08:35:49.972376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:23:02.482 [2024-11-20 08:35:49.972387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:50.010063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:50.010105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:02.482 [2024-11-20
08:35:50.010128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.674 ms 00:23:02.482 [2024-11-20 08:35:50.010139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:50.031882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:50.032058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.482 [2024-11-20 08:35:50.032080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.738 ms 00:23:02.482 [2024-11-20 08:35:50.032091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-20 08:35:50.032237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.482 [2024-11-20 08:35:50.032252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.482 [2024-11-20 08:35:50.032270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:02.482 [2024-11-20 08:35:50.032280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.742 [2024-11-20 08:35:50.069552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.742 [2024-11-20 08:35:50.069609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:02.742 [2024-11-20 08:35:50.069624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.316 ms 00:23:02.742 [2024-11-20 08:35:50.069634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.742 [2024-11-20 08:35:50.106170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.742 [2024-11-20 08:35:50.106211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:02.742 [2024-11-20 08:35:50.106239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.556 ms 00:23:02.742 [2024-11-20 08:35:50.106248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.742 [2024-11-20 08:35:50.142706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.742 [2024-11-20 08:35:50.142877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.742 [2024-11-20 08:35:50.142899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.477 ms 00:23:02.742 [2024-11-20 08:35:50.142909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.742 [2024-11-20 08:35:50.178966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.742 [2024-11-20 08:35:50.179036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.742 [2024-11-20 08:35:50.179051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.996 ms 00:23:02.742 [2024-11-20 08:35:50.179061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.742 [2024-11-20 08:35:50.179104] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.742 [2024-11-20 08:35:50.179120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179532] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.742 [2024-11-20 08:35:50.179728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 
08:35:50.179788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.179981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:23:02.743 [2024-11-20 08:35:50.180077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.743 [2024-11-20 08:35:50.180330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.743 [2024-11-20 08:35:50.180346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 232784ec-97dd-4428-aed9-3dfdc45d7381 00:23:02.743 [2024-11-20 08:35:50.180357] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:02.743 [2024-11-20 
08:35:50.180371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:02.743 [2024-11-20 08:35:50.180380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:02.743 [2024-11-20 08:35:50.180390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:02.743 [2024-11-20 08:35:50.180400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.743 [2024-11-20 08:35:50.180411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.743 [2024-11-20 08:35:50.180420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.743 [2024-11-20 08:35:50.180440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.743 [2024-11-20 08:35:50.180449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.743 [2024-11-20 08:35:50.180459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.743 [2024-11-20 08:35:50.180470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.743 [2024-11-20 08:35:50.180481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:23:02.743 [2024-11-20 08:35:50.180491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.200146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.743 [2024-11-20 08:35:50.200185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.743 [2024-11-20 08:35:50.200199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.650 ms 00:23:02.743 [2024-11-20 08:35:50.200209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.200757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.743 [2024-11-20 08:35:50.200771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.743 [2024-11-20 08:35:50.200782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:02.743 [2024-11-20 08:35:50.200792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.259391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.743 [2024-11-20 08:35:50.259446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.743 [2024-11-20 08:35:50.259462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.743 [2024-11-20 08:35:50.259472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.259540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.743 [2024-11-20 08:35:50.259552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.743 [2024-11-20 08:35:50.259562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.743 [2024-11-20 08:35:50.259571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.259678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.743 [2024-11-20 08:35:50.259691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.743 [2024-11-20 08:35:50.259702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.743 [2024-11-20 08:35:50.259711] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:02.743 [2024-11-20 08:35:50.259728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.743 [2024-11-20 08:35:50.259739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.743 [2024-11-20 08:35:50.259749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.743 [2024-11-20 08:35:50.259758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.384037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.384123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:03.003 [2024-11-20 08:35:50.384140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.384150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:03.003 [2024-11-20 08:35:50.487120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:03.003 [2024-11-20 08:35:50.487267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:03.003 [2024-11-20 08:35:50.487356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:03.003 [2024-11-20 08:35:50.487511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:03.003 [2024-11-20 08:35:50.487585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:03.003 [2024-11-20 08:35:50.487658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
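A note on the arithmetic in these dumps: the superblock records ("SB metadata layout") give blk_offs/blk_sz in FTL blocks, while the layout dumps ("NV cache layout" / "Base device layout") give MiB, and the two agree if one assumes the FTL's 4 KiB block size. The same assumption links the spdk_dd invocation just below (--count=262144 blocks) to the 1024 MiB reported by the copy progress above, and reading WAF as total writes over user writes explains the "WAF: inf" in the statistics dump (user writes were 0). A minimal sketch of these conversions in Python; the 4096-byte block size and the WAF formula are our assumptions, the constants are copied from the log:

    BLOCK = 4096          # assumed FTL block size (4 KiB)
    MIB = 1 << 20

    # 'Region type:0x2 ... blk_sz:0x5000'  <->  'Region l2p ... blocks: 80.00 MiB'
    assert 0x5000 * BLOCK / MIB == 80.0
    # 'Region type:0x9 ... blk_sz:0x1900000'  <->  'Region data_btm ... blocks: 102400.00 MiB'
    assert 0x1900000 * BLOCK / MIB == 102400.0
    # 'spdk_dd ... --count=262144'  <->  'Copying: 1024/1024 [MB]'
    assert 262144 * BLOCK / MIB == 1024.0
    # 'total writes: 960' with 'user writes: 0'  ->  'WAF: inf'
    total_writes, user_writes = 960, 0
    waf = total_writes / user_writes if user_writes else float("inf")
    assert waf == float("inf")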
00:23:03.003 [2024-11-20 08:35:50.487668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.003 [2024-11-20 08:35:50.487720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:03.003 [2024-11-20 08:35:50.487730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.003 [2024-11-20 08:35:50.487740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.003 [2024-11-20 08:35:50.487859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.763 ms, result 0 00:23:04.383 00:23:04.383 00:23:04.383 08:35:51 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:04.383 [2024-11-20 08:35:51.789733] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:23:04.383 [2024-11-20 08:35:51.789864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76758 ] 00:23:04.642 [2024-11-20 08:35:51.974528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.642 [2024-11-20 08:35:52.134539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.213 [2024-11-20 08:35:52.498972] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.213 [2024-11-20 08:35:52.499061] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.213 [2024-11-20 08:35:52.660776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.660838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:05.213 [2024-11-20 08:35:52.660858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:05.213 [2024-11-20 08:35:52.660869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.660923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.660936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.213 [2024-11-20 08:35:52.660950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:05.213 [2024-11-20 08:35:52.660960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.660982] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:05.213 [2024-11-20 08:35:52.662104] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:05.213 [2024-11-20 08:35:52.662281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.662298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.213 [2024-11-20 08:35:52.662310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:23:05.213 [2024-11-20 08:35:52.662322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 
[2024-11-20 08:35:52.663878] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:05.213 [2024-11-20 08:35:52.683015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.683065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:05.213 [2024-11-20 08:35:52.683080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.168 ms 00:23:05.213 [2024-11-20 08:35:52.683092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.683172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.683186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:05.213 [2024-11-20 08:35:52.683197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:05.213 [2024-11-20 08:35:52.683207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.690444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.690481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.213 [2024-11-20 08:35:52.690494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.169 ms 00:23:05.213 [2024-11-20 08:35:52.690504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.690593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.690608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.213 [2024-11-20 08:35:52.690619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:05.213 [2024-11-20 08:35:52.690629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.213 [2024-11-20 08:35:52.690678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.213 [2024-11-20 08:35:52.690689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:05.214 [2024-11-20 08:35:52.690700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:05.214 [2024-11-20 08:35:52.690710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.214 [2024-11-20 08:35:52.690736] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:05.214 [2024-11-20 08:35:52.695510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.214 [2024-11-20 08:35:52.695546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.214 [2024-11-20 08:35:52.695559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.788 ms 00:23:05.214 [2024-11-20 08:35:52.695573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.214 [2024-11-20 08:35:52.695605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.214 [2024-11-20 08:35:52.695616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:05.214 [2024-11-20 08:35:52.695626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:05.214 [2024-11-20 08:35:52.695636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.214 [2024-11-20 08:35:52.695696] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:05.214 
[2024-11-20 08:35:52.695720] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:05.214 [2024-11-20 08:35:52.695755] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:05.214 [2024-11-20 08:35:52.695776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:05.214 [2024-11-20 08:35:52.695865] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:05.214 [2024-11-20 08:35:52.695878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:05.214 [2024-11-20 08:35:52.695892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:05.214 [2024-11-20 08:35:52.695905] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:05.214 [2024-11-20 08:35:52.695918] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:05.214 [2024-11-20 08:35:52.695930] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:05.214 [2024-11-20 08:35:52.695940] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:05.214 [2024-11-20 08:35:52.695949] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:05.214 [2024-11-20 08:35:52.695959] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:05.214 [2024-11-20 08:35:52.695973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.214 [2024-11-20 08:35:52.695983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:05.214 [2024-11-20 08:35:52.696016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:23:05.214 [2024-11-20 08:35:52.696026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.214 [2024-11-20 08:35:52.696102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.214 [2024-11-20 08:35:52.696114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:05.214 [2024-11-20 08:35:52.696124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:05.214 [2024-11-20 08:35:52.696133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.214 [2024-11-20 08:35:52.696228] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:05.214 [2024-11-20 08:35:52.696247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:05.214 [2024-11-20 08:35:52.696258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:05.214 [2024-11-20 08:35:52.696290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:05.214 [2024-11-20 08:35:52.696319] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.214 [2024-11-20 08:35:52.696338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:05.214 [2024-11-20 08:35:52.696348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:05.214 [2024-11-20 08:35:52.696357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.214 [2024-11-20 08:35:52.696367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:05.214 [2024-11-20 08:35:52.696376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:05.214 [2024-11-20 08:35:52.696395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:05.214 [2024-11-20 08:35:52.696414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:05.214 [2024-11-20 08:35:52.696442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:05.214 [2024-11-20 08:35:52.696469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:05.214 [2024-11-20 08:35:52.696497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:05.214 [2024-11-20 08:35:52.696525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:05.214 [2024-11-20 08:35:52.696552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.214 [2024-11-20 08:35:52.696570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:05.214 [2024-11-20 08:35:52.696579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:05.214 [2024-11-20 08:35:52.696587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.214 [2024-11-20 08:35:52.696597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:05.214 [2024-11-20 08:35:52.696606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:05.214 [2024-11-20 08:35:52.696615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696624] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:05.214 [2024-11-20 08:35:52.696632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:05.214 [2024-11-20 08:35:52.696644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696653] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:05.214 [2024-11-20 08:35:52.696663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:05.214 [2024-11-20 08:35:52.696672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.214 [2024-11-20 08:35:52.696691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:05.214 [2024-11-20 08:35:52.696701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:05.214 [2024-11-20 08:35:52.696710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:05.214 [2024-11-20 08:35:52.696720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:05.214 [2024-11-20 08:35:52.696729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:05.214 [2024-11-20 08:35:52.696738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:05.214 [2024-11-20 08:35:52.696749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:05.214 [2024-11-20 08:35:52.696762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.214 [2024-11-20 08:35:52.696773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:05.214 [2024-11-20 08:35:52.696783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:05.214 [2024-11-20 08:35:52.696793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:05.214 [2024-11-20 08:35:52.696804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:05.214 [2024-11-20 08:35:52.696814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:05.214 [2024-11-20 08:35:52.696824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:05.214 [2024-11-20 08:35:52.696834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:05.214 [2024-11-20 08:35:52.696844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:05.214 [2024-11-20 08:35:52.696855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:05.214 [2024-11-20 08:35:52.696865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:05.214 [2024-11-20 08:35:52.696875] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:05.215 [2024-11-20 08:35:52.696885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:05.215 [2024-11-20 08:35:52.696896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:05.215 [2024-11-20 08:35:52.696906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:05.215 [2024-11-20 08:35:52.696916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:05.215 [2024-11-20 08:35:52.696931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.215 [2024-11-20 08:35:52.696943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:05.215 [2024-11-20 08:35:52.696953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:05.215 [2024-11-20 08:35:52.696963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:05.215 [2024-11-20 08:35:52.696974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:05.215 [2024-11-20 08:35:52.696996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.215 [2024-11-20 08:35:52.697007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:05.215 [2024-11-20 08:35:52.697018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:23:05.215 [2024-11-20 08:35:52.697028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.215 [2024-11-20 08:35:52.736221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.215 [2024-11-20 08:35:52.736492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.215 [2024-11-20 08:35:52.736639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.206 ms 00:23:05.215 [2024-11-20 08:35:52.736678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.215 [2024-11-20 08:35:52.736809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.215 [2024-11-20 08:35:52.736926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:05.215 [2024-11-20 08:35:52.737020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:05.215 [2024-11-20 08:35:52.737052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.794349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.794586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.475 [2024-11-20 08:35:52.794671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.287 ms 00:23:05.475 [2024-11-20 08:35:52.794707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.794794] mngt/ftl_mngt.c: 
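
The superblock dump above repeats the same layout in raw units: blk_offs and blk_sz are block counts in hex. Assuming the FTL block size of 4 KiB (4096 bytes; the log never prints it, so this is an assumption), those counts convert exactly to the MiB figures printed by dump_region:

    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed, not printed above)

    def mib(nblocks):
        return nblocks * FTL_BLOCK_SIZE / 2**20

    print(mib(0x20))     # 0.125   -> printed as "0.12 MiB" (sb, type 0x0)
    print(mib(0x5000))   # 80.0    -> the l2p region (type 0x2), "80.00 MiB"
    print(mib(0x800))    # 8.0     -> each p2l region (types 0xa-0xd)
    print(mib(0x7220))   # 114.125 -> where the NV-cache metadata ends
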
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.794827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.475 [2024-11-20 08:35:52.794859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:05.475 [2024-11-20 08:35:52.794951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.795508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.795660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.475 [2024-11-20 08:35:52.795736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:23:05.475 [2024-11-20 08:35:52.795771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.795921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.795958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.475 [2024-11-20 08:35:52.796077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:05.475 [2024-11-20 08:35:52.796123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.814018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.814238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.475 [2024-11-20 08:35:52.814394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.871 ms 00:23:05.475 [2024-11-20 08:35:52.814433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.833560] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:05.475 [2024-11-20 08:35:52.833827] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:05.475 [2024-11-20 08:35:52.833981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.834029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:05.475 [2024-11-20 08:35:52.834062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.418 ms 00:23:05.475 [2024-11-20 08:35:52.834130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.865506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.865789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:05.475 [2024-11-20 08:35:52.865934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.322 ms 00:23:05.475 [2024-11-20 08:35:52.865952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.885369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.885428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:05.475 [2024-11-20 08:35:52.885443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.366 ms 00:23:05.475 [2024-11-20 08:35:52.885454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.904123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 
08:35:52.904320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:05.475 [2024-11-20 08:35:52.904344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.642 ms 00:23:05.475 [2024-11-20 08:35:52.904355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.905143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.905171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:05.475 [2024-11-20 08:35:52.905184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:23:05.475 [2024-11-20 08:35:52.905199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:52.992654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:52.992725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:05.475 [2024-11-20 08:35:52.992750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.564 ms 00:23:05.475 [2024-11-20 08:35:52.992761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:53.005273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:05.475 [2024-11-20 08:35:53.008563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:53.008600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:05.475 [2024-11-20 08:35:53.008615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.746 ms 00:23:05.475 [2024-11-20 08:35:53.008627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:53.008743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:53.008757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:05.475 [2024-11-20 08:35:53.008769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:05.475 [2024-11-20 08:35:53.008782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:53.008875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:53.008889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:05.475 [2024-11-20 08:35:53.008900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:05.475 [2024-11-20 08:35:53.008910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:53.008934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:53.008945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:05.475 [2024-11-20 08:35:53.008955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:05.475 [2024-11-20 08:35:53.008966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.475 [2024-11-20 08:35:53.009015] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:05.475 [2024-11-20 08:35:53.009031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.475 [2024-11-20 08:35:53.009041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:05.475 
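
Every management step above is logged as one Action block: a name: line followed by duration: and status: lines. A rough sketch for pairing those up and totalling them (this assumes one log entry per line and is an editor's helper, not an SPDK utility):

    import re

    NAME = re.compile(r"\[FTL\]\[ftl0\] name: (.+)")
    DUR  = re.compile(r"\[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    def step_durations(lines):
        steps, pending = [], None
        for line in lines:
            if m := NAME.search(line):
                pending = m.group(1).strip()
            elif (m := DUR.search(line)) and pending:
                steps.append((pending, float(m.group(1))))
                pending = None
        return steps

    # On the startup sequence above this yields pairs such as
    # ("Initialize metadata", 39.206) and ("Restore P2L checkpoints",
    # 87.564). Their sum stays below the 386.634 ms total reported just
    # below, which also covers steps logged before this excerpt and the
    # gaps between steps.
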
[2024-11-20 08:35:53.009052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:05.475 [2024-11-20 08:35:53.009062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.735 [2024-11-20 08:35:53.045767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.735 [2024-11-20 08:35:53.045828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:05.735 [2024-11-20 08:35:53.045846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.740 ms 00:23:05.735 [2024-11-20 08:35:53.045864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.735 [2024-11-20 08:35:53.045961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.735 [2024-11-20 08:35:53.045974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:05.735 [2024-11-20 08:35:53.045985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:05.735 [2024-11-20 08:35:53.046011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.735 [2024-11-20 08:35:53.047249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.634 ms, result 0 00:23:07.112  [2024-11-20T08:35:55.612Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T08:35:56.548Z] Copying: 53/1024 [MB] (26 MBps) [2024-11-20T08:35:57.496Z] Copying: 80/1024 [MB] (27 MBps) [2024-11-20T08:35:58.456Z] Copying: 107/1024 [MB] (27 MBps) [2024-11-20T08:35:59.396Z] Copying: 135/1024 [MB] (27 MBps) [2024-11-20T08:36:00.333Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-20T08:36:01.268Z] Copying: 192/1024 [MB] (29 MBps) [2024-11-20T08:36:02.646Z] Copying: 221/1024 [MB] (28 MBps) [2024-11-20T08:36:03.584Z] Copying: 249/1024 [MB] (27 MBps) [2024-11-20T08:36:04.564Z] Copying: 277/1024 [MB] (28 MBps) [2024-11-20T08:36:05.501Z] Copying: 306/1024 [MB] (29 MBps) [2024-11-20T08:36:06.439Z] Copying: 334/1024 [MB] (28 MBps) [2024-11-20T08:36:07.426Z] Copying: 363/1024 [MB] (28 MBps) [2024-11-20T08:36:08.361Z] Copying: 393/1024 [MB] (29 MBps) [2024-11-20T08:36:09.302Z] Copying: 423/1024 [MB] (29 MBps) [2024-11-20T08:36:10.683Z] Copying: 451/1024 [MB] (28 MBps) [2024-11-20T08:36:11.251Z] Copying: 478/1024 [MB] (27 MBps) [2024-11-20T08:36:12.284Z] Copying: 507/1024 [MB] (28 MBps) [2024-11-20T08:36:13.662Z] Copying: 537/1024 [MB] (30 MBps) [2024-11-20T08:36:14.600Z] Copying: 567/1024 [MB] (29 MBps) [2024-11-20T08:36:15.538Z] Copying: 596/1024 [MB] (29 MBps) [2024-11-20T08:36:16.474Z] Copying: 625/1024 [MB] (28 MBps) [2024-11-20T08:36:17.413Z] Copying: 655/1024 [MB] (29 MBps) [2024-11-20T08:36:18.351Z] Copying: 686/1024 [MB] (30 MBps) [2024-11-20T08:36:19.289Z] Copying: 716/1024 [MB] (30 MBps) [2024-11-20T08:36:20.286Z] Copying: 745/1024 [MB] (28 MBps) [2024-11-20T08:36:21.666Z] Copying: 777/1024 [MB] (32 MBps) [2024-11-20T08:36:22.234Z] Copying: 809/1024 [MB] (32 MBps) [2024-11-20T08:36:23.611Z] Copying: 844/1024 [MB] (34 MBps) [2024-11-20T08:36:24.548Z] Copying: 882/1024 [MB] (38 MBps) [2024-11-20T08:36:25.487Z] Copying: 916/1024 [MB] (34 MBps) [2024-11-20T08:36:26.425Z] Copying: 947/1024 [MB] (30 MBps) [2024-11-20T08:36:27.361Z] Copying: 978/1024 [MB] (30 MBps) [2024-11-20T08:36:27.930Z] Copying: 1006/1024 [MB] (27 MBps) [2024-11-20T08:36:28.499Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 08:36:28.273611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.273702] 
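
The run of Copying: lines above is spdk_dd's progress repaint; only the final entry, with the 29 MBps average, summarizes the transfer. The average is consistent with the surrounding timestamps: FTL startup finished at 08:35:53.047 and the last tick landed at 08:36:28.499, roughly 35.5 s for 1024 MB (times copied from this log):

    from datetime import datetime

    start = datetime.fromisoformat("2024-11-20 08:35:53.047")  # startup done
    end   = datetime.fromisoformat("2024-11-20 08:36:28.499")  # last tick
    secs = (end - start).total_seconds()
    print(round(secs, 1), round(1024 / secs))  # 35.5 s -> ~29 MB/s
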
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.938 [2024-11-20 08:36:28.273725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.938 [2024-11-20 08:36:28.273742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.273775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.938 [2024-11-20 08:36:28.278591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.278658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.938 [2024-11-20 08:36:28.278689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.793 ms 00:23:40.938 [2024-11-20 08:36:28.278706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.278978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.279007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.938 [2024-11-20 08:36:28.279025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:40.938 [2024-11-20 08:36:28.279041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.282813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.282856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.938 [2024-11-20 08:36:28.282875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:23:40.938 [2024-11-20 08:36:28.282891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.290580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.290652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.938 [2024-11-20 08:36:28.290672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.659 ms 00:23:40.938 [2024-11-20 08:36:28.290690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.333006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.333077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.938 [2024-11-20 08:36:28.333094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.270 ms 00:23:40.938 [2024-11-20 08:36:28.333104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.354690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.354756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.938 [2024-11-20 08:36:28.354773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.539 ms 00:23:40.938 [2024-11-20 08:36:28.354784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.354944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.354969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:40.938 [2024-11-20 08:36:28.354980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:40.938 [2024-11-20 08:36:28.355008] 
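
The shutdown sequence above and just below persists every table — L2P, NV cache metadata, valid map, P2L, then band info and trim metadata — before the superblock is rewritten and the clean state is set at the very end. Read as a crash-safety pattern, nothing declares the device clean until everything that claim depends on is durable; a minimal illustrative sketch (not SPDK code):

    class Dev:
        def persist(self, table): print("persist", table)  # stand-in for real I/O
        def set_clean(self, flag): print("clean =", flag)

    def clean_shutdown(dev):
        for table in ("l2p", "nv_cache_md", "valid_map", "p2l",
                      "band_info", "trim_md", "superblock"):
            dev.persist(table)
        # Last step: a crash anywhere before this line leaves the device
        # marked dirty, so the next mount takes the recovery path instead
        # of trusting half-written metadata.
        dev.set_clean(True)

    clean_shutdown(Dev())
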
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.394435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.394502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:40.938 [2024-11-20 08:36:28.394519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.469 ms 00:23:40.938 [2024-11-20 08:36:28.394529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.433313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.433393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:40.938 [2024-11-20 08:36:28.433410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.777 ms 00:23:40.938 [2024-11-20 08:36:28.433421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.938 [2024-11-20 08:36:28.472613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.938 [2024-11-20 08:36:28.472682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:40.938 [2024-11-20 08:36:28.472699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.188 ms 00:23:40.938 [2024-11-20 08:36:28.472709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.198 [2024-11-20 08:36:28.511352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.198 [2024-11-20 08:36:28.511419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:41.198 [2024-11-20 08:36:28.511435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.580 ms 00:23:41.198 [2024-11-20 08:36:28.511445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.198 [2024-11-20 08:36:28.511518] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:41.198 [2024-11-20 08:36:28.511537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: 
free 00:23:41.198 [2024-11-20 08:36:28.511667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 
261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.511979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:41.198 [2024-11-20 08:36:28.512074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512494] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:41.199 [2024-11-20 08:36:28.512659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:41.199 [2024-11-20 08:36:28.512674] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 232784ec-97dd-4428-aed9-3dfdc45d7381 00:23:41.199 [2024-11-20 08:36:28.512685] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:41.199 [2024-11-20 08:36:28.512694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:41.199 [2024-11-20 08:36:28.512704] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:41.199 [2024-11-20 08:36:28.512715] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:41.199 [2024-11-20 08:36:28.512725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:41.201 [2024-11-20 08:36:28.512735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:41.201 [2024-11-20 08:36:28.512758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:41.201 [2024-11-20 08:36:28.512767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:41.201 [2024-11-20 08:36:28.512776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:41.201 [2024-11-20 08:36:28.512786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.201 [2024-11-20 08:36:28.512797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
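
The band dump above is uniform — 100 bands, each 0/261120 blocks valid, wr_cnt 0, state free — and the stats show user writes: 0, which fits this phase of the restore test reading data back out of ftl0 rather than writing it. Two quick consistency checks on those numbers (4 KiB block size assumed, as above; the 960 total writes are presumably the FTL's own metadata traffic):

    FTL_BLOCK_SIZE = 4096  # assumed, as above

    band_mib = 261120 * FTL_BLOCK_SIZE / 2**20
    print(band_mib, 100 * band_mib)  # 1020.0 MiB/band; 102000.0 MiB for all
                                     # 100 bands, inside the 102400 MiB
                                     # data_btm region dumped earlier

    total_writes, user_writes = 960, 0   # from the stats above
    waf = total_writes / user_writes if user_writes else float("inf")
    print(waf)                           # inf -> the "WAF: inf" line
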
name: Dump statistics 00:23:41.201 [2024-11-20 08:36:28.512808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:23:41.201 [2024-11-20 08:36:28.512818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.201 [2024-11-20 08:36:28.532902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.201 [2024-11-20 08:36:28.532961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:41.201 [2024-11-20 08:36:28.532976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.059 ms 00:23:41.202 [2024-11-20 08:36:28.533001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.533500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.202 [2024-11-20 08:36:28.533535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:41.202 [2024-11-20 08:36:28.533547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:23:41.202 [2024-11-20 08:36:28.533562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.587315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.202 [2024-11-20 08:36:28.587382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.202 [2024-11-20 08:36:28.587397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.202 [2024-11-20 08:36:28.587408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.587487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.202 [2024-11-20 08:36:28.587499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.202 [2024-11-20 08:36:28.587510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.202 [2024-11-20 08:36:28.587527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.587628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.202 [2024-11-20 08:36:28.587642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:41.202 [2024-11-20 08:36:28.587653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.202 [2024-11-20 08:36:28.587662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.587679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.202 [2024-11-20 08:36:28.587690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.202 [2024-11-20 08:36:28.587700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.202 [2024-11-20 08:36:28.587710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.202 [2024-11-20 08:36:28.713709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.202 [2024-11-20 08:36:28.713781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.202 [2024-11-20 08:36:28.713797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.202 [2024-11-20 08:36:28.713809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.819728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 
08:36:28.819803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.461 [2024-11-20 08:36:28.819819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.819829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.819955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.819968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.461 [2024-11-20 08:36:28.819979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.819989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.820088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.461 [2024-11-20 08:36:28.820100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.820110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.820252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.461 [2024-11-20 08:36:28.820263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.820274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.820324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:41.461 [2024-11-20 08:36:28.820335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.820345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.820401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:41.461 [2024-11-20 08:36:28.820411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.820422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.461 [2024-11-20 08:36:28.820478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:41.461 [2024-11-20 08:36:28.820489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.461 [2024-11-20 08:36:28.820499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.461 [2024-11-20 08:36:28.820619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.870 ms, result 0 00:23:42.421 00:23:42.422 00:23:42.422 08:36:29 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:44.327 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:44.327 08:36:31 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:44.327 [2024-11-20 08:36:31.769452] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:23:44.327 [2024-11-20 08:36:31.769606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77171 ] 00:23:44.586 [2024-11-20 08:36:31.968483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.586 [2024-11-20 08:36:32.088067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.155 [2024-11-20 08:36:32.451473] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.155 [2024-11-20 08:36:32.451544] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.155 [2024-11-20 08:36:32.612780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.155 [2024-11-20 08:36:32.612836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:45.155 [2024-11-20 08:36:32.612859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.155 [2024-11-20 08:36:32.612870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.612924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.612936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.156 [2024-11-20 08:36:32.612950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:45.156 [2024-11-20 08:36:32.612960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.612982] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:45.156 [2024-11-20 08:36:32.614005] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:45.156 [2024-11-20 08:36:32.614034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.614045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.156 [2024-11-20 08:36:32.614056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:23:45.156 [2024-11-20 08:36:32.614066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.615565] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:45.156 [2024-11-20 08:36:32.634703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.634751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:45.156 [2024-11-20 08:36:32.634770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.166 ms 00:23:45.156 [2024-11-20 08:36:32.634783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.634864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.634880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:45.156 [2024-11-20 08:36:32.634894] mngt/ftl_mngt.c: 430:trace_step: 
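
The spdk_dd invocation above writes testfile back into ftl0 at an offset: --seek skips that many output I/O units, so if one unit is one 4 KiB block (an assumption, as before) the 131072 gives a 512 MiB starting offset. The EAL arguments pin the run to core mask 0x1, which is why exactly one reactor starts, on core 0. Both decode in a couple of lines (generic arithmetic, not an SPDK API):

    def cores_from_mask(mask):
        # each set bit of the EAL -c mask selects one logical core
        return [i for i in range(mask.bit_length()) if mask >> i & 1]

    print(cores_from_mask(0x1))    # [0] -> "Reactor started on core 0"
    print(131072 * 4096 // 2**20)  # 512 (MiB skipped by --seek=131072,
                                   #      at an assumed 4 KiB I/O unit)
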
*NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:45.156 [2024-11-20 08:36:32.634906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.642011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.642051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.156 [2024-11-20 08:36:32.642067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:23:45.156 [2024-11-20 08:36:32.642081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.642186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.642205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.156 [2024-11-20 08:36:32.642220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:45.156 [2024-11-20 08:36:32.642233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.642286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.642302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:45.156 [2024-11-20 08:36:32.642316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:45.156 [2024-11-20 08:36:32.642329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.642361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:45.156 [2024-11-20 08:36:32.647438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.647474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.156 [2024-11-20 08:36:32.647487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:23:45.156 [2024-11-20 08:36:32.647502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.647534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.647545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:45.156 [2024-11-20 08:36:32.647556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:45.156 [2024-11-20 08:36:32.647566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.647626] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:45.156 [2024-11-20 08:36:32.647650] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:45.156 [2024-11-20 08:36:32.647686] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:45.156 [2024-11-20 08:36:32.647708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:45.156 [2024-11-20 08:36:32.647797] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:45.156 [2024-11-20 08:36:32.647810] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:45.156 [2024-11-20 08:36:32.647823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:45.156 [2024-11-20 08:36:32.647845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:45.156 [2024-11-20 08:36:32.647858] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:45.156 [2024-11-20 08:36:32.647869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:45.156 [2024-11-20 08:36:32.647878] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:45.156 [2024-11-20 08:36:32.647888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:45.156 [2024-11-20 08:36:32.647898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:45.156 [2024-11-20 08:36:32.647912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.647922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:45.156 [2024-11-20 08:36:32.647933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:23:45.156 [2024-11-20 08:36:32.647942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.648033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.156 [2024-11-20 08:36:32.648047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:45.156 [2024-11-20 08:36:32.648057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:45.156 [2024-11-20 08:36:32.648067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.156 [2024-11-20 08:36:32.648162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:45.156 [2024-11-20 08:36:32.648180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:45.156 [2024-11-20 08:36:32.648191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:45.156 [2024-11-20 08:36:32.648220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:45.156 [2024-11-20 08:36:32.648249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.156 [2024-11-20 08:36:32.648267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:45.156 [2024-11-20 08:36:32.648276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:45.156 [2024-11-20 08:36:32.648285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.156 [2024-11-20 08:36:32.648295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:45.156 [2024-11-20 08:36:32.648304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:45.156 [2024-11-20 08:36:32.648321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648331] ftl_layout.c: 
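
The layout summary above is internally consistent: 20971520 L2P entries at 4 bytes each is exactly the 80 MiB l2p region, and 20971520 mappable 4 KiB blocks (block size assumed as before) is 80 GiB of user-addressable space. That also explains the earlier notice that the l2p cache's maximum resident size is 9 (of 10) MiB: the full 80 MiB table cannot sit in memory, so it is paged against the cache.

    entries, addr_size = 20971520, 4    # from the layout summary above
    print(entries * addr_size / 2**20)  # 80.0 MiB  -> matches the l2p region
    print(entries * 4096 / 2**30)       # 80.0 GiB of mappable user space
                                        # (assuming 4 KiB FTL blocks)
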
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:45.156 [2024-11-20 08:36:32.648340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:45.156 [2024-11-20 08:36:32.648367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:45.156 [2024-11-20 08:36:32.648395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:45.156 [2024-11-20 08:36:32.648422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:45.156 [2024-11-20 08:36:32.648449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.156 [2024-11-20 08:36:32.648466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:45.156 [2024-11-20 08:36:32.648475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:45.156 [2024-11-20 08:36:32.648484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.156 [2024-11-20 08:36:32.648492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:45.156 [2024-11-20 08:36:32.648501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:45.156 [2024-11-20 08:36:32.648510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.157 [2024-11-20 08:36:32.648518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:45.157 [2024-11-20 08:36:32.648527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:45.157 [2024-11-20 08:36:32.648536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.157 [2024-11-20 08:36:32.648545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:45.157 [2024-11-20 08:36:32.648554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:45.157 [2024-11-20 08:36:32.648563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.157 [2024-11-20 08:36:32.648573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:45.157 [2024-11-20 08:36:32.648583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:45.157 [2024-11-20 08:36:32.648609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.157 [2024-11-20 08:36:32.648620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.157 [2024-11-20 08:36:32.648630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:45.157 
[2024-11-20 08:36:32.648640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:45.157 [2024-11-20 08:36:32.648649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:45.157 [2024-11-20 08:36:32.648659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:45.157 [2024-11-20 08:36:32.648669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:45.157 [2024-11-20 08:36:32.648679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:45.157 [2024-11-20 08:36:32.648690] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:45.157 [2024-11-20 08:36:32.648703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:45.157 [2024-11-20 08:36:32.648726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:45.157 [2024-11-20 08:36:32.648737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:45.157 [2024-11-20 08:36:32.648748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:45.157 [2024-11-20 08:36:32.648759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:45.157 [2024-11-20 08:36:32.648770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:45.157 [2024-11-20 08:36:32.648780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:45.157 [2024-11-20 08:36:32.648791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:45.157 [2024-11-20 08:36:32.648802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:45.157 [2024-11-20 08:36:32.648812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:45.157 [2024-11-20 08:36:32.648865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:45.157 [2024-11-20 08:36:32.648880] upgrade/ftl_sb_v5.c: 
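
The type:0xfffffffe rows are unallocated remainder, and together with the allocated regions they account for every block of both devices. On the NV cache, metadata ends at blk_offs 0x7220 and the free tail above is 0x13c0e0 blocks: 114.125 MiB + 5056.875 MiB = 5171.00 MiB, the reported NV cache capacity. The base-device table just below closes the same way: 0x19003a0 + 0x3fc60 = 0x1940000 blocks = 103424 MiB, the reported base device capacity (4 KiB blocks assumed throughout):

    FTL_BLOCK_SIZE = 4096  # assumed, as above

    def mib(nblocks):
        return nblocks * FTL_BLOCK_SIZE / 2**20

    print(mib(0x7220), mib(0x13c0e0))  # 114.125 + 5056.875 ...
    print(mib(0x7220 + 0x13c0e0))      # = 5171.0 -> NV cache capacity
    print(mib(0x1900000))              # 102400.0 -> the data region (type 0x9)
    print(mib(0x19003a0 + 0x3fc60))    # 103424.0 -> base device capacity
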
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:45.157 [2024-11-20 08:36:32.648903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:45.157 [2024-11-20 08:36:32.648913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:45.157 [2024-11-20 08:36:32.648924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:45.157 [2024-11-20 08:36:32.648935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.157 [2024-11-20 08:36:32.648946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:45.157 [2024-11-20 08:36:32.648968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:23:45.157 [2024-11-20 08:36:32.648977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.157 [2024-11-20 08:36:32.690442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.157 [2024-11-20 08:36:32.690497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:45.157 [2024-11-20 08:36:32.690514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.455 ms 00:23:45.157 [2024-11-20 08:36:32.690526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.157 [2024-11-20 08:36:32.690636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.157 [2024-11-20 08:36:32.690648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:45.157 [2024-11-20 08:36:32.690660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:45.157 [2024-11-20 08:36:32.690671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.748702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.748875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.417 [2024-11-20 08:36:32.748900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.045 ms 00:23:45.417 [2024-11-20 08:36:32.748910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.748972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.748983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:45.417 [2024-11-20 08:36:32.749015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:45.417 [2024-11-20 08:36:32.749038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.749533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.749547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.417 [2024-11-20 08:36:32.749558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:23:45.417 [2024-11-20 08:36:32.749568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 
08:36:32.749686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.749700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.417 [2024-11-20 08:36:32.749711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:45.417 [2024-11-20 08:36:32.749727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.769100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.769144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.417 [2024-11-20 08:36:32.769163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.383 ms 00:23:45.417 [2024-11-20 08:36:32.769174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.788552] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:45.417 [2024-11-20 08:36:32.788598] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:45.417 [2024-11-20 08:36:32.788614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.788625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:45.417 [2024-11-20 08:36:32.788637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.339 ms 00:23:45.417 [2024-11-20 08:36:32.788648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.818798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.818868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:45.417 [2024-11-20 08:36:32.818884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.147 ms 00:23:45.417 [2024-11-20 08:36:32.818895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.837247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.837292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:45.417 [2024-11-20 08:36:32.837306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:23:45.417 [2024-11-20 08:36:32.837316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.855549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.855596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:45.417 [2024-11-20 08:36:32.855611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.217 ms 00:23:45.417 [2024-11-20 08:36:32.855622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.856457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.856490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:45.417 [2024-11-20 08:36:32.856503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:23:45.417 [2024-11-20 08:36:32.856518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.944290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.944362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:45.417 [2024-11-20 08:36:32.944386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.888 ms 00:23:45.417 [2024-11-20 08:36:32.944397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.417 [2024-11-20 08:36:32.956651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:45.417 [2024-11-20 08:36:32.959961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.417 [2024-11-20 08:36:32.960145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:45.418 [2024-11-20 08:36:32.960171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.521 ms 00:23:45.418 [2024-11-20 08:36:32.960183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.418 [2024-11-20 08:36:32.960298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.418 [2024-11-20 08:36:32.960312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:45.418 [2024-11-20 08:36:32.960323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:45.418 [2024-11-20 08:36:32.960337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.418 [2024-11-20 08:36:32.960426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.418 [2024-11-20 08:36:32.960439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:45.418 [2024-11-20 08:36:32.960450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:45.418 [2024-11-20 08:36:32.960459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.418 [2024-11-20 08:36:32.960482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.418 [2024-11-20 08:36:32.960494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:45.418 [2024-11-20 08:36:32.960504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:45.418 [2024-11-20 08:36:32.960514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.418 [2024-11-20 08:36:32.960544] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:45.418 [2024-11-20 08:36:32.960559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.418 [2024-11-20 08:36:32.960568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:45.418 [2024-11-20 08:36:32.960579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:45.418 [2024-11-20 08:36:32.960589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.677 [2024-11-20 08:36:32.997235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.677 [2024-11-20 08:36:32.997292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:45.677 [2024-11-20 08:36:32.997309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.681 ms 00:23:45.677 [2024-11-20 08:36:32.997326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.677 [2024-11-20 08:36:32.997411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.677 [2024-11-20 08:36:32.997424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:45.677 [2024-11-20 08:36:32.997435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:45.677 [2024-11-20 08:36:32.997445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.677 [2024-11-20 08:36:32.998797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.190 ms, result 0 00:23:46.614  [2024-11-20T08:36:35.113Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-20T08:36:36.051Z] Copying: 56/1024 [MB] (28 MBps) [2024-11-20T08:36:37.431Z] Copying: 85/1024 [MB] (29 MBps) [2024-11-20T08:36:38.369Z] Copying: 113/1024 [MB] (28 MBps) [2024-11-20T08:36:39.304Z] Copying: 141/1024 [MB] (27 MBps) [2024-11-20T08:36:40.240Z] Copying: 168/1024 [MB] (26 MBps) [2024-11-20T08:36:41.178Z] Copying: 194/1024 [MB] (26 MBps) [2024-11-20T08:36:42.116Z] Copying: 223/1024 [MB] (28 MBps) [2024-11-20T08:36:43.056Z] Copying: 253/1024 [MB] (30 MBps) [2024-11-20T08:36:44.434Z] Copying: 283/1024 [MB] (29 MBps) [2024-11-20T08:36:45.001Z] Copying: 313/1024 [MB] (29 MBps) [2024-11-20T08:36:46.380Z] Copying: 343/1024 [MB] (29 MBps) [2024-11-20T08:36:47.318Z] Copying: 372/1024 [MB] (29 MBps) [2024-11-20T08:36:48.254Z] Copying: 403/1024 [MB] (30 MBps) [2024-11-20T08:36:49.189Z] Copying: 433/1024 [MB] (30 MBps) [2024-11-20T08:36:50.124Z] Copying: 467/1024 [MB] (33 MBps) [2024-11-20T08:36:51.153Z] Copying: 498/1024 [MB] (31 MBps) [2024-11-20T08:36:52.092Z] Copying: 531/1024 [MB] (32 MBps) [2024-11-20T08:36:53.029Z] Copying: 564/1024 [MB] (32 MBps) [2024-11-20T08:36:54.407Z] Copying: 592/1024 [MB] (27 MBps) [2024-11-20T08:36:55.343Z] Copying: 620/1024 [MB] (28 MBps) [2024-11-20T08:36:56.279Z] Copying: 650/1024 [MB] (29 MBps) [2024-11-20T08:36:57.216Z] Copying: 678/1024 [MB] (28 MBps) [2024-11-20T08:36:58.171Z] Copying: 705/1024 [MB] (26 MBps) [2024-11-20T08:36:59.108Z] Copying: 734/1024 [MB] (28 MBps) [2024-11-20T08:37:00.044Z] Copying: 762/1024 [MB] (28 MBps) [2024-11-20T08:37:00.999Z] Copying: 793/1024 [MB] (30 MBps) [2024-11-20T08:37:02.405Z] Copying: 823/1024 [MB] (30 MBps) [2024-11-20T08:37:02.976Z] Copying: 857/1024 [MB] (34 MBps) [2024-11-20T08:37:04.353Z] Copying: 891/1024 [MB] (33 MBps) [2024-11-20T08:37:05.287Z] Copying: 921/1024 [MB] (30 MBps) [2024-11-20T08:37:06.221Z] Copying: 950/1024 [MB] (28 MBps) [2024-11-20T08:37:07.157Z] Copying: 981/1024 [MB] (30 MBps) [2024-11-20T08:37:08.092Z] Copying: 1009/1024 [MB] (28 MBps) [2024-11-20T08:37:08.350Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-20T08:37:08.350Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 08:37:08.274829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.789 [2024-11-20 08:37:08.274922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.789 [2024-11-20 08:37:08.274939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:20.789 [2024-11-20 08:37:08.274966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.789 [2024-11-20 08:37:08.276908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.789 [2024-11-20 08:37:08.284368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.789 [2024-11-20 08:37:08.284410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.789 [2024-11-20 08:37:08.284425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.419 ms 00:24:20.789 
[2024-11-20 08:37:08.284436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.789 [2024-11-20 08:37:08.295062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.789 [2024-11-20 08:37:08.295114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.789 [2024-11-20 08:37:08.295130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.833 ms 00:24:20.789 [2024-11-20 08:37:08.295141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.789 [2024-11-20 08:37:08.322090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.789 [2024-11-20 08:37:08.322173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.789 [2024-11-20 08:37:08.322192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.959 ms 00:24:20.789 [2024-11-20 08:37:08.322204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.789 [2024-11-20 08:37:08.327301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.789 [2024-11-20 08:37:08.327355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.789 [2024-11-20 08:37:08.327368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:24:20.789 [2024-11-20 08:37:08.327380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.365363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.365666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:21.072 [2024-11-20 08:37:08.365691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.959 ms 00:24:21.072 [2024-11-20 08:37:08.365702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.387435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.387499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:21.072 [2024-11-20 08:37:08.387516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.711 ms 00:24:21.072 [2024-11-20 08:37:08.387526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.479836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.479914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:21.072 [2024-11-20 08:37:08.479934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.389 ms 00:24:21.072 [2024-11-20 08:37:08.479946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.520430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.520496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:21.072 [2024-11-20 08:37:08.520513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.510 ms 00:24:21.072 [2024-11-20 08:37:08.520524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.559642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.559714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:21.072 [2024-11-20 08:37:08.559729] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.116 ms 00:24:21.072 [2024-11-20 08:37:08.559740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.072 [2024-11-20 08:37:08.599054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.072 [2024-11-20 08:37:08.599115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:21.072 [2024-11-20 08:37:08.599133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.316 ms 00:24:21.072 [2024-11-20 08:37:08.599144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.331 [2024-11-20 08:37:08.638784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.331 [2024-11-20 08:37:08.638850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:21.331 [2024-11-20 08:37:08.638866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.579 ms 00:24:21.331 [2024-11-20 08:37:08.638877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.331 [2024-11-20 08:37:08.638936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:21.331 [2024-11-20 08:37:08.638970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110848 / 261120 wr_cnt: 1 state: open 00:24:21.331 [2024-11-20 08:37:08.639006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639166] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:21.331 [2024-11-20 08:37:08.639416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639426] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 
08:37:08.639713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:24:21.332 [2024-11-20 08:37:08.639986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.639997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:21.332 [2024-11-20 08:37:08.640104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:21.332 [2024-11-20 08:37:08.640115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 232784ec-97dd-4428-aed9-3dfdc45d7381 00:24:21.332 [2024-11-20 08:37:08.640126] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110848 00:24:21.332 [2024-11-20 08:37:08.640137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111808 00:24:21.332 [2024-11-20 08:37:08.640147] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110848 00:24:21.332 [2024-11-20 08:37:08.640158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:24:21.332 [2024-11-20 08:37:08.640167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:21.332 [2024-11-20 08:37:08.640185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:21.332 [2024-11-20 08:37:08.640207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:21.332 [2024-11-20 08:37:08.640216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:21.332 [2024-11-20 08:37:08.640226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:21.332 [2024-11-20 08:37:08.640236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.332 [2024-11-20 08:37:08.640247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:21.332 [2024-11-20 08:37:08.640259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:24:21.332 [2024-11-20 08:37:08.640269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.660464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.332 [2024-11-20 08:37:08.660689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:21.332 [2024-11-20 08:37:08.660713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.172 ms 00:24:21.332 [2024-11-20 08:37:08.660731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.661320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:21.332 [2024-11-20 08:37:08.661334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:21.332 [2024-11-20 08:37:08.661346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:24:21.332 [2024-11-20 08:37:08.661357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.714887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.332 [2024-11-20 08:37:08.714952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:21.332 [2024-11-20 08:37:08.714974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.332 [2024-11-20 08:37:08.715001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.715116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.332 [2024-11-20 08:37:08.715132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:21.332 [2024-11-20 08:37:08.715144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.332 [2024-11-20 08:37:08.715154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.715249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.332 [2024-11-20 08:37:08.715262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.332 [2024-11-20 08:37:08.715274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.332 [2024-11-20 08:37:08.715288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.332 [2024-11-20 08:37:08.715306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.333 [2024-11-20 08:37:08.715316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.333 [2024-11-20 08:37:08.715327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.333 [2024-11-20 08:37:08.715337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.333 [2024-11-20 08:37:08.844032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.333 [2024-11-20 08:37:08.844105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.333 [2024-11-20 08:37:08.844131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.333 [2024-11-20 08:37:08.844141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.591 [2024-11-20 08:37:08.948143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.591 [2024-11-20 08:37:08.948272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:21.591 [2024-11-20 08:37:08.948335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.591 [2024-11-20 08:37:08.948357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.591 [2024-11-20 08:37:08.948499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:21.591 [2024-11-20 08:37:08.948579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:21.591 [2024-11-20 08:37:08.948672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.591 [2024-11-20 08:37:08.948741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:21.591 [2024-11-20 08:37:08.948751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.591 [2024-11-20 08:37:08.948760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.591 [2024-11-20 08:37:08.948894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 675.884 ms, result 0 00:24:22.987 00:24:22.987 00:24:22.987 08:37:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:22.987 [2024-11-20 08:37:10.500119] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:24:22.987 [2024-11-20 08:37:10.500272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77556 ] 00:24:23.247 [2024-11-20 08:37:10.684593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.506 [2024-11-20 08:37:10.807773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.765 [2024-11-20 08:37:11.164007] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.766 [2024-11-20 08:37:11.164079] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.025 [2024-11-20 08:37:11.327640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.327928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:24.025 [2024-11-20 08:37:11.327961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.025 [2024-11-20 08:37:11.327972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.328066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.328080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.025 [2024-11-20 08:37:11.328095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:24.025 [2024-11-20 08:37:11.328106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.328129] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:24.025 [2024-11-20 08:37:11.329064] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:24.025 [2024-11-20 08:37:11.329086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.329097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.025 [2024-11-20 08:37:11.329109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:24:24.025 [2024-11-20 08:37:11.329119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.330619] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:24.025 [2024-11-20 08:37:11.350112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.350356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:24.025 [2024-11-20 08:37:11.350382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.517 ms 00:24:24.025 [2024-11-20 08:37:11.350393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.350491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.350507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:24.025 [2024-11-20 08:37:11.350519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:24.025 [2024-11-20 08:37:11.350529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.357788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:24.025 [2024-11-20 08:37:11.358007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.025 [2024-11-20 08:37:11.358030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.179 ms 00:24:24.025 [2024-11-20 08:37:11.358042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.025 [2024-11-20 08:37:11.358145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.025 [2024-11-20 08:37:11.358159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.025 [2024-11-20 08:37:11.358169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:24.026 [2024-11-20 08:37:11.358179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.358230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-11-20 08:37:11.358242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:24.026 [2024-11-20 08:37:11.358253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:24.026 [2024-11-20 08:37:11.358263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.358291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.026 [2024-11-20 08:37:11.363281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-11-20 08:37:11.363317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.026 [2024-11-20 08:37:11.363330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:24:24.026 [2024-11-20 08:37:11.363344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.363376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-11-20 08:37:11.363397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:24.026 [2024-11-20 08:37:11.363408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:24.026 [2024-11-20 08:37:11.363418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.363475] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:24.026 [2024-11-20 08:37:11.363500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:24.026 [2024-11-20 08:37:11.363535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:24.026 [2024-11-20 08:37:11.363556] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:24.026 [2024-11-20 08:37:11.363648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:24.026 [2024-11-20 08:37:11.363661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:24.026 [2024-11-20 08:37:11.363674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:24.026 [2024-11-20 08:37:11.363687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:24.026 [2024-11-20 08:37:11.363699] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:24.026 [2024-11-20 08:37:11.363711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:24.026 [2024-11-20 08:37:11.363721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:24.026 [2024-11-20 08:37:11.363731] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:24.026 [2024-11-20 08:37:11.363741] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:24.026 [2024-11-20 08:37:11.363756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-11-20 08:37:11.363766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:24.026 [2024-11-20 08:37:11.363777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:24.026 [2024-11-20 08:37:11.363787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.363863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-11-20 08:37:11.363874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:24.026 [2024-11-20 08:37:11.363884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:24.026 [2024-11-20 08:37:11.363894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-11-20 08:37:11.364010] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:24.026 [2024-11-20 08:37:11.364029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:24.026 [2024-11-20 08:37:11.364040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:24.026 [2024-11-20 08:37:11.364071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:24.026 [2024-11-20 08:37:11.364101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.026 [2024-11-20 08:37:11.364120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:24.026 [2024-11-20 08:37:11.364130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:24.026 [2024-11-20 08:37:11.364139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.026 [2024-11-20 08:37:11.364148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:24.026 [2024-11-20 08:37:11.364158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:24.026 [2024-11-20 08:37:11.364176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:24.026 [2024-11-20 08:37:11.364195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364205] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:24.026 [2024-11-20 08:37:11.364224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:24.026 [2024-11-20 08:37:11.364252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:24.026 [2024-11-20 08:37:11.364279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:24.026 [2024-11-20 08:37:11.364307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:24.026 [2024-11-20 08:37:11.364334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.026 [2024-11-20 08:37:11.364351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:24.026 [2024-11-20 08:37:11.364360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:24.026 [2024-11-20 08:37:11.364369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.026 [2024-11-20 08:37:11.364378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:24.026 [2024-11-20 08:37:11.364387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:24.026 [2024-11-20 08:37:11.364395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:24.026 [2024-11-20 08:37:11.364413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:24.026 [2024-11-20 08:37:11.364424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364433] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:24.026 [2024-11-20 08:37:11.364443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:24.026 [2024-11-20 08:37:11.364452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-11-20 08:37:11.364471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:24.026 [2024-11-20 08:37:11.364481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:24.026 [2024-11-20 08:37:11.364490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:24.026 
[2024-11-20 08:37:11.364499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:24.026 [2024-11-20 08:37:11.364509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:24.026 [2024-11-20 08:37:11.364518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:24.026 [2024-11-20 08:37:11.364529] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:24.026 [2024-11-20 08:37:11.364542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.026 [2024-11-20 08:37:11.364560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:24.026 [2024-11-20 08:37:11.364571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:24.026 [2024-11-20 08:37:11.364581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:24.026 [2024-11-20 08:37:11.364591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:24.026 [2024-11-20 08:37:11.364601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:24.026 [2024-11-20 08:37:11.364611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:24.026 [2024-11-20 08:37:11.364622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:24.026 [2024-11-20 08:37:11.364632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:24.026 [2024-11-20 08:37:11.364642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:24.026 [2024-11-20 08:37:11.364652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:24.027 [2024-11-20 08:37:11.364702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:24.027 [2024-11-20 08:37:11.364717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:24.027 [2024-11-20 08:37:11.364738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:24.027 [2024-11-20 08:37:11.364748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:24.027 [2024-11-20 08:37:11.364759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:24.027 [2024-11-20 08:37:11.364770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.364779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:24.027 [2024-11-20 08:37:11.364789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:24:24.027 [2024-11-20 08:37:11.364799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.406685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.406741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.027 [2024-11-20 08:37:11.406758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.901 ms 00:24:24.027 [2024-11-20 08:37:11.406770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.406878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.406890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.027 [2024-11-20 08:37:11.406902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:24.027 [2024-11-20 08:37:11.406912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.468235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.468296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.027 [2024-11-20 08:37:11.468311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.319 ms 00:24:24.027 [2024-11-20 08:37:11.468322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.468382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.468394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.027 [2024-11-20 08:37:11.468406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:24.027 [2024-11-20 08:37:11.468421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.468920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.468935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.027 [2024-11-20 08:37:11.468947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:24:24.027 [2024-11-20 08:37:11.468956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.469107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.469123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.027 [2024-11-20 08:37:11.469134] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:24.027 [2024-11-20 08:37:11.469150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.488830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.488888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.027 [2024-11-20 08:37:11.488908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.686 ms 00:24:24.027 [2024-11-20 08:37:11.488919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.508718] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:24.027 [2024-11-20 08:37:11.508784] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:24.027 [2024-11-20 08:37:11.508802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.508814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:24.027 [2024-11-20 08:37:11.508827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.738 ms 00:24:24.027 [2024-11-20 08:37:11.508837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.539888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.539983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:24.027 [2024-11-20 08:37:11.540014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.023 ms 00:24:24.027 [2024-11-20 08:37:11.540025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.560272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.560359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:24.027 [2024-11-20 08:37:11.560376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.186 ms 00:24:24.027 [2024-11-20 08:37:11.560387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.580910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.581194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:24.027 [2024-11-20 08:37:11.581236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.479 ms 00:24:24.027 [2024-11-20 08:37:11.581247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.027 [2024-11-20 08:37:11.582172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-11-20 08:37:11.582199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.027 [2024-11-20 08:37:11.582212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:24:24.027 [2024-11-20 08:37:11.582226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.286 [2024-11-20 08:37:11.673083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.286 [2024-11-20 08:37:11.673160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:24.286 [2024-11-20 08:37:11.673189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.974 ms 00:24:24.286 [2024-11-20 08:37:11.673200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.286 [2024-11-20 08:37:11.687698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.286 [2024-11-20 08:37:11.691377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.286 [2024-11-20 08:37:11.691425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.286 [2024-11-20 08:37:11.691441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:24:24.286 [2024-11-20 08:37:11.691452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.286 [2024-11-20 08:37:11.691572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.286 [2024-11-20 08:37:11.691586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:24.286 [2024-11-20 08:37:11.691598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:24.286 [2024-11-20 08:37:11.691613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.286 [2024-11-20 08:37:11.693239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.286 [2024-11-20 08:37:11.693281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.286 [2024-11-20 08:37:11.693294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:24:24.286 [2024-11-20 08:37:11.693304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.287 [2024-11-20 08:37:11.693349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.287 [2024-11-20 08:37:11.693362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:24.287 [2024-11-20 08:37:11.693373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:24.287 [2024-11-20 08:37:11.693384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.287 [2024-11-20 08:37:11.693423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:24.287 [2024-11-20 08:37:11.693440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.287 [2024-11-20 08:37:11.693452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:24.287 [2024-11-20 08:37:11.693463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:24.287 [2024-11-20 08:37:11.693473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.287 [2024-11-20 08:37:11.734916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.287 [2024-11-20 08:37:11.735006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.287 [2024-11-20 08:37:11.735026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.486 ms 00:24:24.287 [2024-11-20 08:37:11.735049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.287 [2024-11-20 08:37:11.735171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.287 [2024-11-20 08:37:11.735189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.287 [2024-11-20 08:37:11.735202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:24.287 [2024-11-20 08:37:11.735213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
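A note on reading the records above: mngt/ftl_mngt.c logs every management step as a fixed quadruple via trace_step(), an Action marker (line 427), the step name (428), its duration (430), and its status (431), and finish_msg() (line 459) then reports the wall-clock total for the whole pipeline. The per-step durations can be cross-checked against that total with a one-liner; ftl0_startup.log here is only a placeholder name for wherever these notices were captured:

  grep -o 'duration: [0-9.]* ms' ftl0_startup.log | awk '{sum += $2} END {printf "%.3f ms\n", sum}'

Because finish_msg() prints "duration = ... ms" with an equals sign rather than a colon, only the individual steps are summed; any gap to the 409.022 ms total reported next comes from steps logged before this excerpt plus time spent between steps.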
00:24:24.287 [2024-11-20 08:37:11.736496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.022 ms, result 0 00:24:25.666  [2024-11-20T08:37:14.166Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T08:37:15.106Z] Copying: 58/1024 [MB] (31 MBps) [2024-11-20T08:37:16.055Z] Copying: 89/1024 [MB] (31 MBps) [2024-11-20T08:37:16.992Z] Copying: 122/1024 [MB] (32 MBps) [2024-11-20T08:37:18.368Z] Copying: 153/1024 [MB] (31 MBps) [2024-11-20T08:37:19.306Z] Copying: 183/1024 [MB] (29 MBps) [2024-11-20T08:37:20.243Z] Copying: 214/1024 [MB] (31 MBps) [2024-11-20T08:37:21.217Z] Copying: 248/1024 [MB] (34 MBps) [2024-11-20T08:37:22.154Z] Copying: 283/1024 [MB] (35 MBps) [2024-11-20T08:37:23.090Z] Copying: 318/1024 [MB] (34 MBps) [2024-11-20T08:37:24.029Z] Copying: 350/1024 [MB] (32 MBps) [2024-11-20T08:37:24.967Z] Copying: 383/1024 [MB] (33 MBps) [2024-11-20T08:37:26.389Z] Copying: 416/1024 [MB] (32 MBps) [2024-11-20T08:37:27.325Z] Copying: 449/1024 [MB] (32 MBps) [2024-11-20T08:37:28.260Z] Copying: 479/1024 [MB] (30 MBps) [2024-11-20T08:37:29.199Z] Copying: 511/1024 [MB] (32 MBps) [2024-11-20T08:37:30.133Z] Copying: 542/1024 [MB] (30 MBps) [2024-11-20T08:37:31.069Z] Copying: 575/1024 [MB] (33 MBps) [2024-11-20T08:37:32.005Z] Copying: 607/1024 [MB] (31 MBps) [2024-11-20T08:37:33.386Z] Copying: 639/1024 [MB] (32 MBps) [2024-11-20T08:37:33.953Z] Copying: 674/1024 [MB] (34 MBps) [2024-11-20T08:37:35.327Z] Copying: 709/1024 [MB] (35 MBps) [2024-11-20T08:37:36.262Z] Copying: 743/1024 [MB] (33 MBps) [2024-11-20T08:37:37.197Z] Copying: 774/1024 [MB] (30 MBps) [2024-11-20T08:37:38.132Z] Copying: 807/1024 [MB] (33 MBps) [2024-11-20T08:37:39.066Z] Copying: 839/1024 [MB] (31 MBps) [2024-11-20T08:37:40.000Z] Copying: 871/1024 [MB] (31 MBps) [2024-11-20T08:37:40.936Z] Copying: 904/1024 [MB] (32 MBps) [2024-11-20T08:37:41.936Z] Copying: 937/1024 [MB] (33 MBps) [2024-11-20T08:37:43.311Z] Copying: 969/1024 [MB] (31 MBps) [2024-11-20T08:37:43.877Z] Copying: 1000/1024 [MB] (30 MBps) [2024-11-20T08:37:44.136Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 08:37:44.056620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.057029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.575 [2024-11-20 08:37:44.057069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:56.575 [2024-11-20 08:37:44.057087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.575 [2024-11-20 08:37:44.057159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.575 [2024-11-20 08:37:44.062202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.062264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.575 [2024-11-20 08:37:44.062287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms 00:24:56.575 [2024-11-20 08:37:44.062304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.575 [2024-11-20 08:37:44.062614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.062921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:56.575 [2024-11-20 08:37:44.062949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:56.575 [2024-11-20 08:37:44.062967] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.575 [2024-11-20 08:37:44.067979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.068050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:56.575 [2024-11-20 08:37:44.068071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.970 ms 00:24:56.575 [2024-11-20 08:37:44.068089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.575 [2024-11-20 08:37:44.075615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.075678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:56.575 [2024-11-20 08:37:44.075695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.479 ms 00:24:56.575 [2024-11-20 08:37:44.075708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.575 [2024-11-20 08:37:44.120253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.575 [2024-11-20 08:37:44.120618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:56.575 [2024-11-20 08:37:44.120650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.523 ms 00:24:56.575 [2024-11-20 08:37:44.120663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.835 [2024-11-20 08:37:44.146970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.835 [2024-11-20 08:37:44.147099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:56.835 [2024-11-20 08:37:44.147119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.252 ms 00:24:56.835 [2024-11-20 08:37:44.147131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.835 [2024-11-20 08:37:44.260551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.835 [2024-11-20 08:37:44.260697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:56.835 [2024-11-20 08:37:44.260720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.481 ms 00:24:56.835 [2024-11-20 08:37:44.260735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.835 [2024-11-20 08:37:44.307834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.835 [2024-11-20 08:37:44.307934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:56.835 [2024-11-20 08:37:44.307954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.144 ms 00:24:56.835 [2024-11-20 08:37:44.307965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.835 [2024-11-20 08:37:44.353470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.835 [2024-11-20 08:37:44.353574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:56.835 [2024-11-20 08:37:44.353622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.454 ms 00:24:56.835 [2024-11-20 08:37:44.353634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.095 [2024-11-20 08:37:44.399319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.095 [2024-11-20 08:37:44.399416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:57.095 [2024-11-20 08:37:44.399435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 45.657 ms 00:24:57.095 [2024-11-20 08:37:44.399447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.095 [2024-11-20 08:37:44.442090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.095 [2024-11-20 08:37:44.442200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:57.095 [2024-11-20 08:37:44.442220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.528 ms 00:24:57.095 [2024-11-20 08:37:44.442232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.095 [2024-11-20 08:37:44.442331] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:57.095 [2024-11-20 08:37:44.442355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:57.095 [2024-11-20 08:37:44.442371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 
state: free 00:24:57.095 [2024-11-20 08:37:44.442588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:57.095 [2024-11-20 08:37:44.442706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 
0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.442976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:57.096 [2024-11-20 08:37:44.443487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:57.097 [2024-11-20 08:37:44.443499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:57.097 [2024-11-20 08:37:44.443510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:57.097 [2024-11-20 08:37:44.443531] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:57.097 [2024-11-20 08:37:44.443542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 232784ec-97dd-4428-aed9-3dfdc45d7381 00:24:57.097 [2024-11-20 08:37:44.443554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:57.097 [2024-11-20 08:37:44.443565] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21184 00:24:57.097 [2024-11-20 08:37:44.443576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20224 00:24:57.097 [2024-11-20 08:37:44.443587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0475 00:24:57.097 [2024-11-20 08:37:44.443599] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:57.097 [2024-11-20 08:37:44.443620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:57.097 [2024-11-20 08:37:44.443630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:57.097 [2024-11-20 08:37:44.443657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:57.097 [2024-11-20 08:37:44.443667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:57.097 [2024-11-20 08:37:44.443678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.097 [2024-11-20 08:37:44.443690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:57.097 [2024-11-20 08:37:44.443702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:24:57.097 [2024-11-20 08:37:44.443713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.466208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.097 [2024-11-20 08:37:44.466297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:57.097 [2024-11-20 08:37:44.466316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.461 ms 00:24:57.097 [2024-11-20 08:37:44.466344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.467027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.097 [2024-11-20 08:37:44.467046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:57.097 [2024-11-20 08:37:44.467059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:24:57.097 [2024-11-20 08:37:44.467095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.523399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.097 
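A quick sanity check on the statistics dump a few records up: the WAF line is simply total writes divided by user writes, and with the two counters from this run the logged value reproduces exactly (the lone open band, Band 1 at 131072 / 261120, also accounts for the full 131072 valid LBAs reported). A minimal recomputation, assuming nothing beyond those two numbers from the dump:

  awk 'BEGIN { printf "WAF: %.4f\n", 21184 / 20224 }'

which prints WAF: 1.0475, matching the dump.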
[2024-11-20 08:37:44.523491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.097 [2024-11-20 08:37:44.523515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.097 [2024-11-20 08:37:44.523526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.523626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.097 [2024-11-20 08:37:44.523639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.097 [2024-11-20 08:37:44.523651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.097 [2024-11-20 08:37:44.523662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.523775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.097 [2024-11-20 08:37:44.523791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.097 [2024-11-20 08:37:44.523803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.097 [2024-11-20 08:37:44.523820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.097 [2024-11-20 08:37:44.523840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.097 [2024-11-20 08:37:44.523853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.097 [2024-11-20 08:37:44.523864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.097 [2024-11-20 08:37:44.523876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.661944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.662060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.356 [2024-11-20 08:37:44.662094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.662115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.356 [2024-11-20 08:37:44.779242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.356 [2024-11-20 08:37:44.779424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.356 [2024-11-20 08:37:44.779521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779660] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.356 [2024-11-20 08:37:44.779686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.356 [2024-11-20 08:37:44.779768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.356 [2024-11-20 08:37:44.779852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.779919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.356 [2024-11-20 08:37:44.779931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.356 [2024-11-20 08:37:44.779943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.356 [2024-11-20 08:37:44.779954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.356 [2024-11-20 08:37:44.780129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 724.643 ms, result 0 00:24:58.733 00:24:58.733 00:24:58.733 08:37:45 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:00.635 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:00.635 08:37:47 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:00.635 08:37:47 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:00.635 08:37:47 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:00.635 08:37:47 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:00.635 Process with pid 76051 is not found 00:25:00.635 Remove shared memory files 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76051 00:25:00.635 08:37:48 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' -z 76051 ']' 00:25:00.635 08:37:48 ftl.ftl_restore -- common/autotest_common.sh@961 -- # kill -0 76051 00:25:00.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (76051) - No such process 00:25:00.635 08:37:48 ftl.ftl_restore -- common/autotest_common.sh@984 -- # echo 'Process with pid 76051 is not found' 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:00.635 08:37:48 
ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:00.635 08:37:48 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:00.635 ************************************ 00:25:00.635 END TEST ftl_restore 00:25:00.635 ************************************ 00:25:00.635 00:25:00.635 real 3m0.161s 00:25:00.635 user 2m46.998s 00:25:00.635 sys 0m14.723s 00:25:00.635 08:37:48 ftl.ftl_restore -- common/autotest_common.sh@1133 -- # xtrace_disable 00:25:00.635 08:37:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:00.636 08:37:48 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:00.636 08:37:48 ftl -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:25:00.636 08:37:48 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:25:00.636 08:37:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:00.636 ************************************ 00:25:00.636 START TEST ftl_dirty_shutdown 00:25:00.636 ************************************ 00:25:00.636 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:00.895 * Looking for test storage... 00:25:00.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1638 -- # lcov --version 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:25:00.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.895 --rc genhtml_branch_coverage=1 00:25:00.895 --rc genhtml_function_coverage=1 00:25:00.895 --rc genhtml_legend=1 00:25:00.895 --rc geninfo_all_blocks=1 00:25:00.895 --rc geninfo_unexecuted_blocks=1 00:25:00.895 00:25:00.895 ' 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:25:00.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.895 --rc genhtml_branch_coverage=1 00:25:00.895 --rc genhtml_function_coverage=1 00:25:00.895 --rc genhtml_legend=1 00:25:00.895 --rc geninfo_all_blocks=1 00:25:00.895 --rc geninfo_unexecuted_blocks=1 00:25:00.895 00:25:00.895 ' 00:25:00.895 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:25:00.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.895 --rc genhtml_branch_coverage=1 00:25:00.895 --rc genhtml_function_coverage=1 00:25:00.895 --rc genhtml_legend=1 00:25:00.895 --rc geninfo_all_blocks=1 00:25:00.895 --rc geninfo_unexecuted_blocks=1 00:25:00.895 00:25:00.896 ' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:25:00.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.896 --rc genhtml_branch_coverage=1 00:25:00.896 --rc genhtml_function_coverage=1 00:25:00.896 --rc genhtml_legend=1 00:25:00.896 --rc geninfo_all_blocks=1 00:25:00.896 --rc geninfo_unexecuted_blocks=1 00:25:00.896 00:25:00.896 ' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:00.896 08:37:48 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78010 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78010 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # '[' -z 78010 ']' 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@843 -- # local max_retries=100 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@847 -- # xtrace_disable 00:25:00.896 08:37:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:01.155 [2024-11-20 08:37:48.469816] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
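For orientation, the xtrace records around here show dirty_shutdown.sh bringing up its own SPDK target before any FTL work starts: it launches spdk_tgt pinned to a single core, waits for the RPC socket, then drives everything through rpc.py. A stripped-down sketch of that sequence, with the helpers assumed to be on PATH (the real logic lives in dirty_shutdown.sh and autotest_common.sh, not in a standalone script like this):

  spdk_tgt -m 0x1 &          # single-core reactor mask, matching "-c 0x1" in the EAL parameters below
  svcpid=$!
  waitforlisten "$svcpid"    # polls the RPC socket (/var/tmp/spdk.sock) until the target answers
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

The attach call is the same one ftl/common.sh issues a few records below to expose the nvme0n1 base bdev.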
00:25:01.155 [2024-11-20 08:37:48.470279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78010 ] 00:25:01.155 [2024-11-20 08:37:48.663881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.414 [2024-11-20 08:37:48.824557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # return 0 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:02.787 08:37:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=nvme0n1 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1373 -- # local nb 00:25:02.787 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:25:03.046 { 00:25:03.046 "name": "nvme0n1", 00:25:03.046 "aliases": [ 00:25:03.046 "a34e209b-94fd-477b-8b85-4181b12fa802" 00:25:03.046 ], 00:25:03.046 "product_name": "NVMe disk", 00:25:03.046 "block_size": 4096, 00:25:03.046 "num_blocks": 1310720, 00:25:03.046 "uuid": "a34e209b-94fd-477b-8b85-4181b12fa802", 00:25:03.046 "numa_id": -1, 00:25:03.046 "assigned_rate_limits": { 00:25:03.046 "rw_ios_per_sec": 0, 00:25:03.046 "rw_mbytes_per_sec": 0, 00:25:03.046 "r_mbytes_per_sec": 0, 00:25:03.046 "w_mbytes_per_sec": 0 00:25:03.046 }, 00:25:03.046 "claimed": true, 00:25:03.046 "claim_type": "read_many_write_one", 00:25:03.046 "zoned": false, 00:25:03.046 "supported_io_types": { 00:25:03.046 "read": true, 00:25:03.046 "write": true, 00:25:03.046 "unmap": true, 00:25:03.046 "flush": true, 00:25:03.046 "reset": true, 00:25:03.046 "nvme_admin": true, 00:25:03.046 "nvme_io": true, 00:25:03.046 "nvme_io_md": false, 00:25:03.046 "write_zeroes": true, 00:25:03.046 "zcopy": false, 00:25:03.046 "get_zone_info": false, 00:25:03.046 "zone_management": false, 00:25:03.046 "zone_append": false, 00:25:03.046 "compare": true, 00:25:03.046 "compare_and_write": false, 00:25:03.046 "abort": true, 00:25:03.046 "seek_hole": false, 00:25:03.046 "seek_data": false, 00:25:03.046 
"copy": true, 00:25:03.046 "nvme_iov_md": false 00:25:03.046 }, 00:25:03.046 "driver_specific": { 00:25:03.046 "nvme": [ 00:25:03.046 { 00:25:03.046 "pci_address": "0000:00:11.0", 00:25:03.046 "trid": { 00:25:03.046 "trtype": "PCIe", 00:25:03.046 "traddr": "0000:00:11.0" 00:25:03.046 }, 00:25:03.046 "ctrlr_data": { 00:25:03.046 "cntlid": 0, 00:25:03.046 "vendor_id": "0x1b36", 00:25:03.046 "model_number": "QEMU NVMe Ctrl", 00:25:03.046 "serial_number": "12341", 00:25:03.046 "firmware_revision": "8.0.0", 00:25:03.046 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:03.046 "oacs": { 00:25:03.046 "security": 0, 00:25:03.046 "format": 1, 00:25:03.046 "firmware": 0, 00:25:03.046 "ns_manage": 1 00:25:03.046 }, 00:25:03.046 "multi_ctrlr": false, 00:25:03.046 "ana_reporting": false 00:25:03.046 }, 00:25:03.046 "vs": { 00:25:03.046 "nvme_version": "1.4" 00:25:03.046 }, 00:25:03.046 "ns_data": { 00:25:03.046 "id": 1, 00:25:03.046 "can_share": false 00:25:03.046 } 00:25:03.046 } 00:25:03.046 ], 00:25:03.046 "mp_policy": "active_passive" 00:25:03.046 } 00:25:03.046 } 00:25:03.046 ]' 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # nb=1310720 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # echo 5120 00:25:03.046 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=66275fd3-28a7-43fa-b883-443eba234eed 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:03.304 08:37:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66275fd3-28a7-43fa-b883-443eba234eed 00:25:03.871 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:03.871 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=257c5e22-6d3d-45b1-9abe-5c3bf9afd283 00:25:03.871 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 257c5e22-6d3d-45b1-9abe-5c3bf9afd283 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1373 -- # local nb 00:25:04.130 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.389 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:25:04.389 { 00:25:04.389 "name": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:04.389 "aliases": [ 00:25:04.389 "lvs/nvme0n1p0" 00:25:04.389 ], 00:25:04.389 "product_name": "Logical Volume", 00:25:04.389 "block_size": 4096, 00:25:04.389 "num_blocks": 26476544, 00:25:04.389 "uuid": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:04.389 "assigned_rate_limits": { 00:25:04.389 "rw_ios_per_sec": 0, 00:25:04.389 "rw_mbytes_per_sec": 0, 00:25:04.389 "r_mbytes_per_sec": 0, 00:25:04.389 "w_mbytes_per_sec": 0 00:25:04.389 }, 00:25:04.389 "claimed": false, 00:25:04.389 "zoned": false, 00:25:04.389 "supported_io_types": { 00:25:04.389 "read": true, 00:25:04.389 "write": true, 00:25:04.389 "unmap": true, 00:25:04.389 "flush": false, 00:25:04.389 "reset": true, 00:25:04.389 "nvme_admin": false, 00:25:04.389 "nvme_io": false, 00:25:04.389 "nvme_io_md": false, 00:25:04.389 "write_zeroes": true, 00:25:04.389 "zcopy": false, 00:25:04.389 "get_zone_info": false, 00:25:04.389 "zone_management": false, 00:25:04.389 "zone_append": false, 00:25:04.389 "compare": false, 00:25:04.389 "compare_and_write": false, 00:25:04.389 "abort": false, 00:25:04.389 "seek_hole": true, 00:25:04.389 "seek_data": true, 00:25:04.389 "copy": false, 00:25:04.389 "nvme_iov_md": false 00:25:04.389 }, 00:25:04.389 "driver_specific": { 00:25:04.389 "lvol": { 00:25:04.389 "lvol_store_uuid": "257c5e22-6d3d-45b1-9abe-5c3bf9afd283", 00:25:04.389 "base_bdev": "nvme0n1", 00:25:04.389 "thin_provision": true, 00:25:04.389 "num_allocated_clusters": 0, 00:25:04.389 "snapshot": false, 00:25:04.389 "clone": false, 00:25:04.389 "esnap_clone": false 00:25:04.389 } 00:25:04.389 } 00:25:04.389 } 00:25:04.389 ]' 00:25:04.389 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:25:04.389 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:25:04.389 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # nb=26476544 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # echo 103424 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:04.648 08:37:51 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1373 -- # local nb 00:25:04.907 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:25:05.167 { 00:25:05.167 "name": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:05.167 "aliases": [ 00:25:05.167 "lvs/nvme0n1p0" 00:25:05.167 ], 00:25:05.167 "product_name": "Logical Volume", 00:25:05.167 "block_size": 4096, 00:25:05.167 "num_blocks": 26476544, 00:25:05.167 "uuid": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:05.167 "assigned_rate_limits": { 00:25:05.167 "rw_ios_per_sec": 0, 00:25:05.167 "rw_mbytes_per_sec": 0, 00:25:05.167 "r_mbytes_per_sec": 0, 00:25:05.167 "w_mbytes_per_sec": 0 00:25:05.167 }, 00:25:05.167 "claimed": false, 00:25:05.167 "zoned": false, 00:25:05.167 "supported_io_types": { 00:25:05.167 "read": true, 00:25:05.167 "write": true, 00:25:05.167 "unmap": true, 00:25:05.167 "flush": false, 00:25:05.167 "reset": true, 00:25:05.167 "nvme_admin": false, 00:25:05.167 "nvme_io": false, 00:25:05.167 "nvme_io_md": false, 00:25:05.167 "write_zeroes": true, 00:25:05.167 "zcopy": false, 00:25:05.167 "get_zone_info": false, 00:25:05.167 "zone_management": false, 00:25:05.167 "zone_append": false, 00:25:05.167 "compare": false, 00:25:05.167 "compare_and_write": false, 00:25:05.167 "abort": false, 00:25:05.167 "seek_hole": true, 00:25:05.167 "seek_data": true, 00:25:05.167 "copy": false, 00:25:05.167 "nvme_iov_md": false 00:25:05.167 }, 00:25:05.167 "driver_specific": { 00:25:05.167 "lvol": { 00:25:05.167 "lvol_store_uuid": "257c5e22-6d3d-45b1-9abe-5c3bf9afd283", 00:25:05.167 "base_bdev": "nvme0n1", 00:25:05.167 "thin_provision": true, 00:25:05.167 "num_allocated_clusters": 0, 00:25:05.167 "snapshot": false, 00:25:05.167 "clone": false, 00:25:05.167 "esnap_clone": false 00:25:05.167 } 00:25:05.167 } 00:25:05.167 } 00:25:05.167 ]' 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # nb=26476544 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # echo 103424 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:05.167 08:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1373 -- # local nb 00:25:05.426 08:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da3cf10b-b01e-46a9-8192-7e14a2bfde5f 00:25:05.685 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:25:05.685 { 00:25:05.685 "name": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:05.685 "aliases": [ 00:25:05.685 "lvs/nvme0n1p0" 00:25:05.685 ], 00:25:05.685 "product_name": "Logical Volume", 00:25:05.685 "block_size": 4096, 00:25:05.685 "num_blocks": 26476544, 00:25:05.685 "uuid": "da3cf10b-b01e-46a9-8192-7e14a2bfde5f", 00:25:05.685 "assigned_rate_limits": { 00:25:05.685 "rw_ios_per_sec": 0, 00:25:05.685 "rw_mbytes_per_sec": 0, 00:25:05.685 "r_mbytes_per_sec": 0, 00:25:05.685 "w_mbytes_per_sec": 0 00:25:05.685 }, 00:25:05.685 "claimed": false, 00:25:05.685 "zoned": false, 00:25:05.685 "supported_io_types": { 00:25:05.685 "read": true, 00:25:05.685 "write": true, 00:25:05.685 "unmap": true, 00:25:05.685 "flush": false, 00:25:05.685 "reset": true, 00:25:05.685 "nvme_admin": false, 00:25:05.685 "nvme_io": false, 00:25:05.685 "nvme_io_md": false, 00:25:05.685 "write_zeroes": true, 00:25:05.685 "zcopy": false, 00:25:05.685 "get_zone_info": false, 00:25:05.685 "zone_management": false, 00:25:05.685 "zone_append": false, 00:25:05.685 "compare": false, 00:25:05.685 "compare_and_write": false, 00:25:05.685 "abort": false, 00:25:05.685 "seek_hole": true, 00:25:05.685 "seek_data": true, 00:25:05.685 "copy": false, 00:25:05.685 "nvme_iov_md": false 00:25:05.685 }, 00:25:05.685 "driver_specific": { 00:25:05.685 "lvol": { 00:25:05.685 "lvol_store_uuid": "257c5e22-6d3d-45b1-9abe-5c3bf9afd283", 00:25:05.685 "base_bdev": "nvme0n1", 00:25:05.685 "thin_provision": true, 00:25:05.685 "num_allocated_clusters": 0, 00:25:05.685 "snapshot": false, 00:25:05.685 "clone": false, 00:25:05.685 "esnap_clone": false 00:25:05.685 } 00:25:05.685 } 00:25:05.685 } 00:25:05.685 ]' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # nb=26476544 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=103424 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # echo 103424 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d da3cf10b-b01e-46a9-8192-7e14a2bfde5f 
--l2p_dram_limit 10' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:05.944 08:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d da3cf10b-b01e-46a9-8192-7e14a2bfde5f --l2p_dram_limit 10 -c nvc0n1p0 00:25:06.202 [2024-11-20 08:37:53.574669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.202 [2024-11-20 08:37:53.575095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:06.202 [2024-11-20 08:37:53.575142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:06.202 [2024-11-20 08:37:53.575157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.575276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.575293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.203 [2024-11-20 08:37:53.575312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:06.203 [2024-11-20 08:37:53.575326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.575389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:06.203 [2024-11-20 08:37:53.576530] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:06.203 [2024-11-20 08:37:53.576567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.576580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.203 [2024-11-20 08:37:53.576596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:25:06.203 [2024-11-20 08:37:53.576607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.576703] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 405049cc-0b62-4405-9897-35956d2f78a3 00:25:06.203 [2024-11-20 08:37:53.579317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.579367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:06.203 [2024-11-20 08:37:53.579382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:06.203 [2024-11-20 08:37:53.579399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.593904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.594288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.203 [2024-11-20 08:37:53.594324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.440 ms 00:25:06.203 [2024-11-20 08:37:53.594341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.594518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.594540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.203 [2024-11-20 08:37:53.594555] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:06.203 [2024-11-20 08:37:53.594578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.594691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.594712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:06.203 [2024-11-20 08:37:53.594732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:06.203 [2024-11-20 08:37:53.594755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.594790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:06.203 [2024-11-20 08:37:53.601367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.601614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.203 [2024-11-20 08:37:53.601643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.595 ms 00:25:06.203 [2024-11-20 08:37:53.601655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.601722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.601734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:06.203 [2024-11-20 08:37:53.601749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:06.203 [2024-11-20 08:37:53.601761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.601823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:06.203 [2024-11-20 08:37:53.601972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:06.203 [2024-11-20 08:37:53.602012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:06.203 [2024-11-20 08:37:53.602029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:06.203 [2024-11-20 08:37:53.602047] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602060] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602075] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:06.203 [2024-11-20 08:37:53.602087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:06.203 [2024-11-20 08:37:53.602113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:06.203 [2024-11-20 08:37:53.602124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:06.203 [2024-11-20 08:37:53.602139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.602150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:06.203 [2024-11-20 08:37:53.602181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:06.203 [2024-11-20 08:37:53.602210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.602297] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.203 [2024-11-20 08:37:53.602309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:06.203 [2024-11-20 08:37:53.602324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:06.203 [2024-11-20 08:37:53.602336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.203 [2024-11-20 08:37:53.602449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:06.203 [2024-11-20 08:37:53.602465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:06.203 [2024-11-20 08:37:53.602480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:06.203 [2024-11-20 08:37:53.602519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:06.203 [2024-11-20 08:37:53.602556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.203 [2024-11-20 08:37:53.602579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:06.203 [2024-11-20 08:37:53.602590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:06.203 [2024-11-20 08:37:53.602603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.203 [2024-11-20 08:37:53.602615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:06.203 [2024-11-20 08:37:53.602629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:06.203 [2024-11-20 08:37:53.602639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:06.203 [2024-11-20 08:37:53.602667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:06.203 [2024-11-20 08:37:53.602702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:06.203 [2024-11-20 08:37:53.602734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:06.203 [2024-11-20 08:37:53.602770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602793] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:06.203 [2024-11-20 08:37:53.602803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.203 [2024-11-20 08:37:53.602826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:06.203 [2024-11-20 08:37:53.602841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:06.203 [2024-11-20 08:37:53.602851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.203 [2024-11-20 08:37:53.602864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:06.203 [2024-11-20 08:37:53.602874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:06.204 [2024-11-20 08:37:53.602887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.204 [2024-11-20 08:37:53.602896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:06.204 [2024-11-20 08:37:53.602909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:06.204 [2024-11-20 08:37:53.602918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.204 [2024-11-20 08:37:53.602931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:06.204 [2024-11-20 08:37:53.602941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:06.204 [2024-11-20 08:37:53.602954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.204 [2024-11-20 08:37:53.602964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:06.204 [2024-11-20 08:37:53.602979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:06.204 [2024-11-20 08:37:53.602993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:06.204 [2024-11-20 08:37:53.603019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.204 [2024-11-20 08:37:53.603032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:06.204 [2024-11-20 08:37:53.603049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:06.204 [2024-11-20 08:37:53.603059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:06.204 [2024-11-20 08:37:53.603073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:06.204 [2024-11-20 08:37:53.603084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:06.204 [2024-11-20 08:37:53.603099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:06.204 [2024-11-20 08:37:53.603116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:06.204 [2024-11-20 08:37:53.603134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:06.204 [2024-11-20 08:37:53.603165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:06.204 [2024-11-20 08:37:53.603178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:06.204 [2024-11-20 08:37:53.603193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:06.204 [2024-11-20 08:37:53.603204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:06.204 [2024-11-20 08:37:53.603230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:06.204 [2024-11-20 08:37:53.603242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:06.204 [2024-11-20 08:37:53.603256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:06.204 [2024-11-20 08:37:53.603280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:06.204 [2024-11-20 08:37:53.603297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:06.204 [2024-11-20 08:37:53.603357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:06.204 [2024-11-20 08:37:53.603371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:06.204 [2024-11-20 08:37:53.603397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:06.204 [2024-11-20 08:37:53.603408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:06.204 [2024-11-20 08:37:53.603421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:06.204 [2024-11-20 08:37:53.603432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.204 [2024-11-20 08:37:53.603446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:06.204 [2024-11-20 08:37:53.603457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:25:06.204 [2024-11-20 08:37:53.603472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.204 [2024-11-20 08:37:53.603524] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:06.204 [2024-11-20 08:37:53.603543] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:09.506 [2024-11-20 08:37:56.977681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.506 [2024-11-20 08:37:56.977777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:09.506 [2024-11-20 08:37:56.977796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3379.631 ms 00:25:09.506 [2024-11-20 08:37:56.977811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.506 [2024-11-20 08:37:57.022342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.506 [2024-11-20 08:37:57.022771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.506 [2024-11-20 08:37:57.022801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.190 ms 00:25:09.506 [2024-11-20 08:37:57.022818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.506 [2024-11-20 08:37:57.023020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.506 [2024-11-20 08:37:57.023041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.506 [2024-11-20 08:37:57.023053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:09.506 [2024-11-20 08:37:57.023070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.075042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.075118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.766 [2024-11-20 08:37:57.075136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.980 ms 00:25:09.766 [2024-11-20 08:37:57.075151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.075211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.075233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.766 [2024-11-20 08:37:57.075246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.766 [2024-11-20 08:37:57.075260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.075796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.075824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.766 [2024-11-20 08:37:57.075845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:25:09.766 [2024-11-20 08:37:57.075860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.075976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.076017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.766 [2024-11-20 08:37:57.076033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:09.766 [2024-11-20 08:37:57.076050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.099295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.099369] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.766 [2024-11-20 08:37:57.099387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.257 ms 00:25:09.766 [2024-11-20 08:37:57.099423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.114705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.766 [2024-11-20 08:37:57.118672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.118738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.766 [2024-11-20 08:37:57.118758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.129 ms 00:25:09.766 [2024-11-20 08:37:57.118770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.220609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.220699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:09.766 [2024-11-20 08:37:57.220722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.923 ms 00:25:09.766 [2024-11-20 08:37:57.220734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.221043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.221064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.766 [2024-11-20 08:37:57.221083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:25:09.766 [2024-11-20 08:37:57.221095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.263700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.263854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:09.766 [2024-11-20 08:37:57.263894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.556 ms 00:25:09.766 [2024-11-20 08:37:57.263913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.305015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.305124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:09.766 [2024-11-20 08:37:57.305156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.981 ms 00:25:09.766 [2024-11-20 08:37:57.305174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.766 [2024-11-20 08:37:57.306077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.766 [2024-11-20 08:37:57.306325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.766 [2024-11-20 08:37:57.306368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:25:09.766 [2024-11-20 08:37:57.306386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.418209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.418315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:10.025 [2024-11-20 08:37:57.418348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.840 ms 00:25:10.025 [2024-11-20 08:37:57.418365] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.463020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.463112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:10.025 [2024-11-20 08:37:57.463134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.494 ms 00:25:10.025 [2024-11-20 08:37:57.463146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.506232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.506316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:10.025 [2024-11-20 08:37:57.506338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.035 ms 00:25:10.025 [2024-11-20 08:37:57.506350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.548968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.549272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.025 [2024-11-20 08:37:57.549304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.560 ms 00:25:10.025 [2024-11-20 08:37:57.549316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.549402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.549416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.025 [2024-11-20 08:37:57.549435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:10.025 [2024-11-20 08:37:57.549446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.549590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-11-20 08:37:57.549604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.025 [2024-11-20 08:37:57.549623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:10.025 [2024-11-20 08:37:57.549644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-11-20 08:37:57.550906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3982.155 ms, result 0 00:25:10.025 { 00:25:10.025 "name": "ftl0", 00:25:10.025 "uuid": "405049cc-0b62-4405-9897-35956d2f78a3" 00:25:10.025 } 00:25:10.284 08:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:10.284 08:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:10.284 08:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:10.284 08:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:10.543 08:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:10.543 /dev/nbd0 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # local i 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@878 -- # (( i = 1 )) 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:25:10.543 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # break 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:10.801 1+0 records in 00:25:10.801 1+0 records out 00:25:10.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00185192 s, 2.2 MB/s 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # size=4096 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@896 -- # return 0 00:25:10.801 08:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:10.801 [2024-11-20 08:37:58.220789] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:25:10.801 [2024-11-20 08:37:58.220930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78163 ] 00:25:11.059 [2024-11-20 08:37:58.407278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.059 [2024-11-20 08:37:58.539967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.435  [2024-11-20T08:38:00.934Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-20T08:38:02.313Z] Copying: 344/1024 [MB] (170 MBps) [2024-11-20T08:38:03.251Z] Copying: 518/1024 [MB] (174 MBps) [2024-11-20T08:38:04.190Z] Copying: 688/1024 [MB] (169 MBps) [2024-11-20T08:38:05.187Z] Copying: 856/1024 [MB] (167 MBps) [2024-11-20T08:38:06.121Z] Copying: 1024/1024 [MB] (average 171 MBps) 00:25:18.560 00:25:18.560 08:38:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:20.460 08:38:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:20.717 [2024-11-20 08:38:08.035940] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
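The spdk_dd and md5sum steps just traced are the heart of the dirty-shutdown check: fill a file with random data, record its checksum, then push the same data through the FTL bdev exposed at /dev/nbd0 with O_DIRECT. A condensed sketch of that write phase follows; paths, block size, and counts are copied from the trace (262144 x 4096 B = 1 GiB), and the read-back-and-compare against the saved checksum happens after the dirty shutdown and restore, outside this excerpt.

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # 1 GiB of random data: 262144 blocks of 4096 bytes.
    "$spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
    md5_before=$(md5sum "$testfile")   # golden checksum, compared after restore
    # Write the same data through the FTL bdev exported as an nbd device.
    "$spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0                     # flush the nbd queue before unloading ftl0

The sync, nbd_stop_disk, and bdev_ftl_unload calls that appear further down close out this phase before the device is brought back up dirty.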
00:25:20.717 [2024-11-20 08:38:08.036126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78263 ] 00:25:20.717 [2024-11-20 08:38:08.228830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.975 [2024-11-20 08:38:08.385853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.413  [2024-11-20T08:38:10.915Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-20T08:38:11.854Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-20T08:38:12.790Z] Copying: 57/1024 [MB] (19 MBps) [2024-11-20T08:38:13.725Z] Copying: 77/1024 [MB] (19 MBps) [2024-11-20T08:38:15.101Z] Copying: 96/1024 [MB] (19 MBps) [2024-11-20T08:38:16.035Z] Copying: 116/1024 [MB] (19 MBps) [2024-11-20T08:38:16.970Z] Copying: 134/1024 [MB] (18 MBps) [2024-11-20T08:38:17.904Z] Copying: 153/1024 [MB] (18 MBps) [2024-11-20T08:38:18.837Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-20T08:38:19.776Z] Copying: 191/1024 [MB] (18 MBps) [2024-11-20T08:38:21.152Z] Copying: 210/1024 [MB] (18 MBps) [2024-11-20T08:38:21.720Z] Copying: 229/1024 [MB] (19 MBps) [2024-11-20T08:38:23.097Z] Copying: 249/1024 [MB] (19 MBps) [2024-11-20T08:38:24.035Z] Copying: 268/1024 [MB] (19 MBps) [2024-11-20T08:38:24.981Z] Copying: 287/1024 [MB] (19 MBps) [2024-11-20T08:38:25.915Z] Copying: 305/1024 [MB] (18 MBps) [2024-11-20T08:38:26.849Z] Copying: 324/1024 [MB] (18 MBps) [2024-11-20T08:38:27.783Z] Copying: 343/1024 [MB] (18 MBps) [2024-11-20T08:38:28.717Z] Copying: 361/1024 [MB] (18 MBps) [2024-11-20T08:38:30.091Z] Copying: 380/1024 [MB] (18 MBps) [2024-11-20T08:38:31.026Z] Copying: 398/1024 [MB] (17 MBps) [2024-11-20T08:38:31.962Z] Copying: 416/1024 [MB] (18 MBps) [2024-11-20T08:38:32.896Z] Copying: 436/1024 [MB] (19 MBps) [2024-11-20T08:38:33.831Z] Copying: 455/1024 [MB] (19 MBps) [2024-11-20T08:38:34.766Z] Copying: 476/1024 [MB] (21 MBps) [2024-11-20T08:38:35.700Z] Copying: 498/1024 [MB] (22 MBps) [2024-11-20T08:38:37.116Z] Copying: 519/1024 [MB] (20 MBps) [2024-11-20T08:38:37.705Z] Copying: 539/1024 [MB] (19 MBps) [2024-11-20T08:38:39.081Z] Copying: 559/1024 [MB] (20 MBps) [2024-11-20T08:38:40.017Z] Copying: 577/1024 [MB] (18 MBps) [2024-11-20T08:38:40.953Z] Copying: 598/1024 [MB] (20 MBps) [2024-11-20T08:38:41.889Z] Copying: 617/1024 [MB] (19 MBps) [2024-11-20T08:38:42.824Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-20T08:38:43.760Z] Copying: 654/1024 [MB] (19 MBps) [2024-11-20T08:38:44.697Z] Copying: 673/1024 [MB] (19 MBps) [2024-11-20T08:38:46.074Z] Copying: 692/1024 [MB] (19 MBps) [2024-11-20T08:38:47.010Z] Copying: 712/1024 [MB] (20 MBps) [2024-11-20T08:38:47.946Z] Copying: 732/1024 [MB] (19 MBps) [2024-11-20T08:38:48.882Z] Copying: 751/1024 [MB] (19 MBps) [2024-11-20T08:38:49.815Z] Copying: 771/1024 [MB] (20 MBps) [2024-11-20T08:38:50.750Z] Copying: 790/1024 [MB] (19 MBps) [2024-11-20T08:38:51.728Z] Copying: 810/1024 [MB] (19 MBps) [2024-11-20T08:38:52.669Z] Copying: 828/1024 [MB] (18 MBps) [2024-11-20T08:38:54.046Z] Copying: 848/1024 [MB] (19 MBps) [2024-11-20T08:38:54.982Z] Copying: 868/1024 [MB] (20 MBps) [2024-11-20T08:38:55.972Z] Copying: 887/1024 [MB] (18 MBps) [2024-11-20T08:38:56.930Z] Copying: 907/1024 [MB] (20 MBps) [2024-11-20T08:38:57.868Z] Copying: 926/1024 [MB] (18 MBps) [2024-11-20T08:38:58.804Z] Copying: 944/1024 [MB] (18 MBps) [2024-11-20T08:38:59.753Z] Copying: 962/1024 [MB] (18 MBps) 
[2024-11-20T08:39:00.729Z] Copying: 981/1024 [MB] (18 MBps) [2024-11-20T08:39:01.666Z] Copying: 1001/1024 [MB] (19 MBps) [2024-11-20T08:39:01.925Z] Copying: 1020/1024 [MB] (19 MBps) [2024-11-20T08:39:03.305Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:26:15.744 00:26:15.744 08:39:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:15.744 08:39:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:15.744 08:39:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:16.004 [2024-11-20 08:39:03.512360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.004 [2024-11-20 08:39:03.512442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.004 [2024-11-20 08:39:03.512461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.004 [2024-11-20 08:39:03.512475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.004 [2024-11-20 08:39:03.512504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.004 [2024-11-20 08:39:03.517010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.004 [2024-11-20 08:39:03.517073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.004 [2024-11-20 08:39:03.517092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:26:16.004 [2024-11-20 08:39:03.517104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.004 [2024-11-20 08:39:03.519279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.004 [2024-11-20 08:39:03.519341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.004 [2024-11-20 08:39:03.519360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:26:16.004 [2024-11-20 08:39:03.519371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.004 [2024-11-20 08:39:03.533256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.004 [2024-11-20 08:39:03.533356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.004 [2024-11-20 08:39:03.533375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.866 ms 00:26:16.004 [2024-11-20 08:39:03.533386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.004 [2024-11-20 08:39:03.538821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.004 [2024-11-20 08:39:03.538891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.004 [2024-11-20 08:39:03.538924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms 00:26:16.004 [2024-11-20 08:39:03.538939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.580402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.580483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.265 [2024-11-20 08:39:03.580504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.415 ms 00:26:16.265 [2024-11-20 08:39:03.580516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.605565] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.605646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.265 [2024-11-20 08:39:03.605667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.980 ms 00:26:16.265 [2024-11-20 08:39:03.605699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.605944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.605960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.265 [2024-11-20 08:39:03.605975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:26:16.265 [2024-11-20 08:39:03.605985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.648293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.648375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.265 [2024-11-20 08:39:03.648397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.318 ms 00:26:16.265 [2024-11-20 08:39:03.648408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.690768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.690853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.265 [2024-11-20 08:39:03.690874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.332 ms 00:26:16.265 [2024-11-20 08:39:03.690885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.732893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.732977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.265 [2024-11-20 08:39:03.733006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.971 ms 00:26:16.265 [2024-11-20 08:39:03.733018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.774597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.265 [2024-11-20 08:39:03.774681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.265 [2024-11-20 08:39:03.774703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.446 ms 00:26:16.265 [2024-11-20 08:39:03.774714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.265 [2024-11-20 08:39:03.774824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:16.265 [2024-11-20 08:39:03.774845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774913] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.774979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 
08:39:03.775252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:26:16.265 [2024-11-20 08:39:03.775592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:16.265 [2024-11-20 08:39:03.775618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.775983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.266 [2024-11-20 08:39:03.776182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.266 [2024-11-20 08:39:03.776195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 405049cc-0b62-4405-9897-35956d2f78a3 00:26:16.266 [2024-11-20 08:39:03.776207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:16.266 [2024-11-20 08:39:03.776223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:16.266 [2024-11-20 08:39:03.776233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:16.266 [2024-11-20 08:39:03.776251] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:16.266 [2024-11-20 08:39:03.776261] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.266 [2024-11-20 08:39:03.776275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.266 [2024-11-20 08:39:03.776285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.266 [2024-11-20 08:39:03.776298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.266 [2024-11-20 08:39:03.776307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.266 [2024-11-20 08:39:03.776321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.266 [2024-11-20 08:39:03.776332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.266 [2024-11-20 08:39:03.776346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:26:16.266 [2024-11-20 08:39:03.776356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.266 [2024-11-20 08:39:03.797799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.266 [2024-11-20 08:39:03.797872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.266 [2024-11-20 08:39:03.797895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.378 ms 00:26:16.266 [2024-11-20 08:39:03.797906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.266 [2024-11-20 08:39:03.798517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.266 [2024-11-20 08:39:03.798538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.266 [2024-11-20 08:39:03.798552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:26:16.266 [2024-11-20 08:39:03.798564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.526 [2024-11-20 08:39:03.870379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.526 [2024-11-20 08:39:03.870453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.526 [2024-11-20 08:39:03.870473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.526 [2024-11-20 08:39:03.870484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.526 [2024-11-20 08:39:03.870573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.526 [2024-11-20 08:39:03.870586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.526 [2024-11-20 08:39:03.870600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.526 [2024-11-20 08:39:03.870611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.526 [2024-11-20 08:39:03.870732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.526 [2024-11-20 08:39:03.870747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.526 [2024-11-20 08:39:03.870764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.526 [2024-11-20 08:39:03.870775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.526 [2024-11-20 08:39:03.870802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.526 [2024-11-20 08:39:03.870814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:26:16.526 [2024-11-20 08:39:03.870828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.526 [2024-11-20 08:39:03.870839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.526 [2024-11-20 08:39:04.003871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.526 [2024-11-20 08:39:04.003955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.526 [2024-11-20 08:39:04.003974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.526 [2024-11-20 08:39:04.003985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.109255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.109335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.788 [2024-11-20 08:39:04.109354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.109366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.109500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.109514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.788 [2024-11-20 08:39:04.109528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.109542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.109614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.109626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.788 [2024-11-20 08:39:04.109641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.109653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.109804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.109819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.788 [2024-11-20 08:39:04.109845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.109856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.109910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.109924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:16.788 [2024-11-20 08:39:04.109937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.109948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.110028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.110042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.788 [2024-11-20 08:39:04.110056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.110067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.110140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.788 [2024-11-20 08:39:04.110154] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.788 [2024-11-20 08:39:04.110169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.788 [2024-11-20 08:39:04.110185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.788 [2024-11-20 08:39:04.110332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 598.904 ms, result 0 00:26:16.788 true 00:26:16.788 08:39:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78010 00:26:16.788 08:39:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78010 00:26:16.788 08:39:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:16.788 [2024-11-20 08:39:04.245388] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:26:16.788 [2024-11-20 08:39:04.245545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78833 ] 00:26:17.048 [2024-11-20 08:39:04.432963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.048 [2024-11-20 08:39:04.581137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.438  [2024-11-20T08:39:06.935Z] Copying: 179/1024 [MB] (179 MBps) [2024-11-20T08:39:08.313Z] Copying: 360/1024 [MB] (181 MBps) [2024-11-20T08:39:09.252Z] Copying: 548/1024 [MB] (187 MBps) [2024-11-20T08:39:10.237Z] Copying: 736/1024 [MB] (187 MBps) [2024-11-20T08:39:10.501Z] Copying: 921/1024 [MB] (185 MBps) [2024-11-20T08:39:11.878Z] Copying: 1024/1024 [MB] (average 184 MBps) 00:26:24.317 00:26:24.317 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78010 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:24.317 08:39:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:24.317 [2024-11-20 08:39:11.733187] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:26:24.317 [2024-11-20 08:39:11.733324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78912 ] 00:26:24.576 [2024-11-20 08:39:11.919113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.576 [2024-11-20 08:39:12.045505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.144 [2024-11-20 08:39:12.431875] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.144 [2024-11-20 08:39:12.431966] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.144 [2024-11-20 08:39:12.498475] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:25.144 [2024-11-20 08:39:12.498775] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:25.144 [2024-11-20 08:39:12.499067] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:25.404 [2024-11-20 08:39:12.722334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.722405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:25.404 [2024-11-20 08:39:12.722429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:25.404 [2024-11-20 08:39:12.722444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.722544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.722565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.404 [2024-11-20 08:39:12.722580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:25.404 [2024-11-20 08:39:12.722596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.722633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:25.404 [2024-11-20 08:39:12.724402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:25.404 [2024-11-20 08:39:12.724446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.724466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.404 [2024-11-20 08:39:12.724483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:26:25.404 [2024-11-20 08:39:12.724499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.726174] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:25.404 [2024-11-20 08:39:12.746667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.746762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:25.404 [2024-11-20 08:39:12.746787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.522 ms 00:26:25.404 [2024-11-20 08:39:12.746803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.746948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.746972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:25.404 [2024-11-20 08:39:12.747005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:25.404 [2024-11-20 08:39:12.747024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.755131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.755188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.404 [2024-11-20 08:39:12.755212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.964 ms 00:26:25.404 [2024-11-20 08:39:12.755228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.404 [2024-11-20 08:39:12.755367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.404 [2024-11-20 08:39:12.755390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.404 [2024-11-20 08:39:12.755410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:25.404 [2024-11-20 08:39:12.755428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.755515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.405 [2024-11-20 08:39:12.755541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:25.405 [2024-11-20 08:39:12.755559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:25.405 [2024-11-20 08:39:12.755575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.755622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:25.405 [2024-11-20 08:39:12.761074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.405 [2024-11-20 08:39:12.761118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.405 [2024-11-20 08:39:12.761140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.472 ms 00:26:25.405 [2024-11-20 08:39:12.761154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.761232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.405 [2024-11-20 08:39:12.761251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:25.405 [2024-11-20 08:39:12.761267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:25.405 [2024-11-20 08:39:12.761283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.761379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:25.405 [2024-11-20 08:39:12.761421] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:25.405 [2024-11-20 08:39:12.761471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:25.405 [2024-11-20 08:39:12.761498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:25.405 [2024-11-20 08:39:12.761613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:25.405 [2024-11-20 08:39:12.761634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:25.405 
[2024-11-20 08:39:12.761654] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:25.405 [2024-11-20 08:39:12.761677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:25.405 [2024-11-20 08:39:12.761701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:25.405 [2024-11-20 08:39:12.761718] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:25.405 [2024-11-20 08:39:12.761734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:25.405 [2024-11-20 08:39:12.761750] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:25.405 [2024-11-20 08:39:12.761766] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:25.405 [2024-11-20 08:39:12.761783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.405 [2024-11-20 08:39:12.761799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:25.405 [2024-11-20 08:39:12.761816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:26:25.405 [2024-11-20 08:39:12.761832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.761933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.405 [2024-11-20 08:39:12.761957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:25.405 [2024-11-20 08:39:12.761975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:25.405 [2024-11-20 08:39:12.762005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.405 [2024-11-20 08:39:12.762145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:25.405 [2024-11-20 08:39:12.762170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:25.405 [2024-11-20 08:39:12.762188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:25.405 [2024-11-20 08:39:12.762237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:25.405 [2024-11-20 08:39:12.762282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.405 [2024-11-20 08:39:12.762311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:25.405 [2024-11-20 08:39:12.762339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:25.405 [2024-11-20 08:39:12.762355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.405 [2024-11-20 08:39:12.762370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:25.405 [2024-11-20 08:39:12.762386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:25.405 [2024-11-20 08:39:12.762401] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:25.405 [2024-11-20 08:39:12.762431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:25.405 [2024-11-20 08:39:12.762476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:25.405 [2024-11-20 08:39:12.762523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:25.405 [2024-11-20 08:39:12.762565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:25.405 [2024-11-20 08:39:12.762608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:25.405 [2024-11-20 08:39:12.762654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.405 [2024-11-20 08:39:12.762684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:25.405 [2024-11-20 08:39:12.762699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:25.405 [2024-11-20 08:39:12.762713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.405 [2024-11-20 08:39:12.762728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:25.405 [2024-11-20 08:39:12.762744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:25.405 [2024-11-20 08:39:12.762759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:25.405 [2024-11-20 08:39:12.762790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:25.405 [2024-11-20 08:39:12.762805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 08:39:12.762820] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:25.405 [2024-11-20 08:39:12.762836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:25.405 [2024-11-20 08:39:12.762852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.405 [2024-11-20 08:39:12.762874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.405 [2024-11-20 
08:39:12.762890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:25.405 [2024-11-20 08:39:12.762906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:25.405 [2024-11-20 08:39:12.762922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:25.405 [2024-11-20 08:39:12.762937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:25.405 [2024-11-20 08:39:12.762951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:25.406 [2024-11-20 08:39:12.762968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:25.406 [2024-11-20 08:39:12.762997] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:25.406 [2024-11-20 08:39:12.763021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:25.406 [2024-11-20 08:39:12.763056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:25.406 [2024-11-20 08:39:12.763073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:25.406 [2024-11-20 08:39:12.763091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:25.406 [2024-11-20 08:39:12.763107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:25.406 [2024-11-20 08:39:12.763123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:25.406 [2024-11-20 08:39:12.763140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:25.406 [2024-11-20 08:39:12.763157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:25.406 [2024-11-20 08:39:12.763175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:25.406 [2024-11-20 08:39:12.763191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:25.406 [2024-11-20 08:39:12.763275] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:25.406 [2024-11-20 08:39:12.763293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.406 [2024-11-20 08:39:12.763330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:25.406 [2024-11-20 08:39:12.763346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:25.406 [2024-11-20 08:39:12.763363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:25.406 [2024-11-20 08:39:12.763382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.763399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:25.406 [2024-11-20 08:39:12.763417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:26:25.406 [2024-11-20 08:39:12.763445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.810106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.810169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.406 [2024-11-20 08:39:12.810212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.651 ms 00:26:25.406 [2024-11-20 08:39:12.810228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.810361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.810381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:25.406 [2024-11-20 08:39:12.810399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:25.406 [2024-11-20 08:39:12.810418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.877097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.877160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.406 [2024-11-20 08:39:12.877188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.671 ms 00:26:25.406 [2024-11-20 08:39:12.877204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.877285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.877303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.406 [2024-11-20 08:39:12.877320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:25.406 [2024-11-20 08:39:12.877333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.877990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.878048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.406 [2024-11-20 08:39:12.878070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:26:25.406 [2024-11-20 08:39:12.878108] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.878311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.878343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.406 [2024-11-20 08:39:12.878363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:25.406 [2024-11-20 08:39:12.878380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.902982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.903046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.406 [2024-11-20 08:39:12.903065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.599 ms 00:26:25.406 [2024-11-20 08:39:12.903077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.924714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:25.406 [2024-11-20 08:39:12.924785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:25.406 [2024-11-20 08:39:12.924803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.924814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:25.406 [2024-11-20 08:39:12.924827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.603 ms 00:26:25.406 [2024-11-20 08:39:12.924837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.406 [2024-11-20 08:39:12.956618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.406 [2024-11-20 08:39:12.956689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:25.406 [2024-11-20 08:39:12.956724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.751 ms 00:26:25.406 [2024-11-20 08:39:12.956737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:12.977623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:12.977721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:25.665 [2024-11-20 08:39:12.977756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.839 ms 00:26:25.665 [2024-11-20 08:39:12.977768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:12.998336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:12.998416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:25.665 [2024-11-20 08:39:12.998434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.527 ms 00:26:25.665 [2024-11-20 08:39:12.998444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:12.999338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:12.999369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:25.665 [2024-11-20 08:39:12.999383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:26:25.665 [2024-11-20 08:39:12.999394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:25.665 [2024-11-20 08:39:13.089134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.089196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:25.665 [2024-11-20 08:39:13.089215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.852 ms 00:26:25.665 [2024-11-20 08:39:13.089227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.103744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:25.665 [2024-11-20 08:39:13.107183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.107229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:25.665 [2024-11-20 08:39:13.107246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.891 ms 00:26:25.665 [2024-11-20 08:39:13.107263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.107405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.107419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:25.665 [2024-11-20 08:39:13.107432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:25.665 [2024-11-20 08:39:13.107442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.107525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.107539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:25.665 [2024-11-20 08:39:13.107551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:25.665 [2024-11-20 08:39:13.107561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.107588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.107599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:25.665 [2024-11-20 08:39:13.107610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:25.665 [2024-11-20 08:39:13.107620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.107657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:25.665 [2024-11-20 08:39:13.107670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.107682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:25.665 [2024-11-20 08:39:13.107693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:25.665 [2024-11-20 08:39:13.107708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.665 [2024-11-20 08:39:13.147879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.665 [2024-11-20 08:39:13.148000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:25.665 [2024-11-20 08:39:13.148033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.208 ms 00:26:25.665 [2024-11-20 08:39:13.148055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.666 [2024-11-20 08:39:13.148222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.666 [2024-11-20 
08:39:13.148252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:25.666 [2024-11-20 08:39:13.148276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:25.666 [2024-11-20 08:39:13.148296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.666 [2024-11-20 08:39:13.149849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 427.695 ms, result 0 00:26:27.039  [2024-11-20T08:39:15.167Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-20T08:39:16.543Z] Copying: 61/1024 [MB] (31 MBps) [2024-11-20T08:39:17.479Z] Copying: 90/1024 [MB] (29 MBps) [2024-11-20T08:39:18.441Z] Copying: 121/1024 [MB] (30 MBps) [2024-11-20T08:39:19.376Z] Copying: 150/1024 [MB] (29 MBps) [2024-11-20T08:39:20.310Z] Copying: 178/1024 [MB] (27 MBps) [2024-11-20T08:39:21.246Z] Copying: 204/1024 [MB] (26 MBps) [2024-11-20T08:39:22.182Z] Copying: 232/1024 [MB] (27 MBps) [2024-11-20T08:39:23.558Z] Copying: 261/1024 [MB] (29 MBps) [2024-11-20T08:39:24.493Z] Copying: 291/1024 [MB] (29 MBps) [2024-11-20T08:39:25.431Z] Copying: 319/1024 [MB] (27 MBps) [2024-11-20T08:39:26.368Z] Copying: 346/1024 [MB] (27 MBps) [2024-11-20T08:39:27.305Z] Copying: 374/1024 [MB] (28 MBps) [2024-11-20T08:39:28.241Z] Copying: 404/1024 [MB] (29 MBps) [2024-11-20T08:39:29.220Z] Copying: 431/1024 [MB] (27 MBps) [2024-11-20T08:39:30.154Z] Copying: 463/1024 [MB] (31 MBps) [2024-11-20T08:39:31.531Z] Copying: 492/1024 [MB] (29 MBps) [2024-11-20T08:39:32.467Z] Copying: 522/1024 [MB] (29 MBps) [2024-11-20T08:39:33.403Z] Copying: 552/1024 [MB] (30 MBps) [2024-11-20T08:39:34.343Z] Copying: 583/1024 [MB] (31 MBps) [2024-11-20T08:39:35.279Z] Copying: 613/1024 [MB] (29 MBps) [2024-11-20T08:39:36.215Z] Copying: 642/1024 [MB] (29 MBps) [2024-11-20T08:39:37.150Z] Copying: 670/1024 [MB] (28 MBps) [2024-11-20T08:39:38.524Z] Copying: 699/1024 [MB] (29 MBps) [2024-11-20T08:39:39.459Z] Copying: 729/1024 [MB] (29 MBps) [2024-11-20T08:39:40.392Z] Copying: 759/1024 [MB] (30 MBps) [2024-11-20T08:39:41.328Z] Copying: 788/1024 [MB] (29 MBps) [2024-11-20T08:39:42.262Z] Copying: 816/1024 [MB] (27 MBps) [2024-11-20T08:39:43.198Z] Copying: 844/1024 [MB] (28 MBps) [2024-11-20T08:39:44.133Z] Copying: 872/1024 [MB] (28 MBps) [2024-11-20T08:39:45.509Z] Copying: 901/1024 [MB] (28 MBps) [2024-11-20T08:39:46.445Z] Copying: 929/1024 [MB] (28 MBps) [2024-11-20T08:39:47.380Z] Copying: 958/1024 [MB] (29 MBps) [2024-11-20T08:39:48.320Z] Copying: 989/1024 [MB] (30 MBps) [2024-11-20T08:39:49.256Z] Copying: 1018/1024 [MB] (29 MBps) [2024-11-20T08:39:49.256Z] Copying: 1048536/1048576 [kB] (5252 kBps) [2024-11-20T08:39:49.256Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-20 08:39:49.151089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.695 [2024-11-20 08:39:49.151438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:01.696 [2024-11-20 08:39:49.151472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:01.696 [2024-11-20 08:39:49.151486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.696 [2024-11-20 08:39:49.154717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:01.696 [2024-11-20 08:39:49.161811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.696 [2024-11-20 08:39:49.161871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:27:01.696 [2024-11-20 08:39:49.161887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.043 ms 00:27:01.696 [2024-11-20 08:39:49.161910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.696 [2024-11-20 08:39:49.171837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.696 [2024-11-20 08:39:49.171939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:01.696 [2024-11-20 08:39:49.171973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.964 ms 00:27:01.696 [2024-11-20 08:39:49.171984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.696 [2024-11-20 08:39:49.195875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.696 [2024-11-20 08:39:49.195961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:01.696 [2024-11-20 08:39:49.195980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.892 ms 00:27:01.696 [2024-11-20 08:39:49.196002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.696 [2024-11-20 08:39:49.201277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.696 [2024-11-20 08:39:49.201339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:01.696 [2024-11-20 08:39:49.201354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.221 ms 00:27:01.696 [2024-11-20 08:39:49.201365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.696 [2024-11-20 08:39:49.241484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.696 [2024-11-20 08:39:49.241562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:01.696 [2024-11-20 08:39:49.241595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.091 ms 00:27:01.696 [2024-11-20 08:39:49.241606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.954 [2024-11-20 08:39:49.265483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.954 [2024-11-20 08:39:49.265567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:01.954 [2024-11-20 08:39:49.265585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.827 ms 00:27:01.954 [2024-11-20 08:39:49.265597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.954 [2024-11-20 08:39:49.375934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.954 [2024-11-20 08:39:49.376050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:01.954 [2024-11-20 08:39:49.376100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.424 ms 00:27:01.954 [2024-11-20 08:39:49.376111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.954 [2024-11-20 08:39:49.418359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.954 [2024-11-20 08:39:49.418437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:01.954 [2024-11-20 08:39:49.418454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.291 ms 00:27:01.954 [2024-11-20 08:39:49.418467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.954 [2024-11-20 08:39:49.459546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.954 [2024-11-20 08:39:49.459622] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:01.954 [2024-11-20 08:39:49.459640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.063 ms 00:27:01.954 [2024-11-20 08:39:49.459650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.954 [2024-11-20 08:39:49.500035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.954 [2024-11-20 08:39:49.500109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:01.954 [2024-11-20 08:39:49.500126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.368 ms 00:27:01.954 [2024-11-20 08:39:49.500136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.214 [2024-11-20 08:39:49.541919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.214 [2024-11-20 08:39:49.542006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:02.214 [2024-11-20 08:39:49.542025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.711 ms 00:27:02.214 [2024-11-20 08:39:49.542037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.214 [2024-11-20 08:39:49.542124] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:02.214 [2024-11-20 08:39:49.542145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112128 / 261120 wr_cnt: 1 state: open
[log condensed: Bands 2-100 all report 0 / 261120 wr_cnt: 0 state: free]
00:27:02.215 [2024-11-20 08:39:49.543507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:02.215 [2024-11-20 08:39:49.543518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 405049cc-0b62-4405-9897-35956d2f78a3 00:27:02.215 [2024-11-20 08:39:49.543539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112128 00:27:02.215 [2024-11-20 08:39:49.543551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113088 00:27:02.215 [2024-11-20 08:39:49.543588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112128 00:27:02.215 [2024-11-20 08:39:49.543607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:27:02.215 [2024-11-20 08:39:49.543625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:02.215 [2024-11-20 08:39:49.543636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:02.215 [2024-11-20 08:39:49.543647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:02.215 [2024-11-20 08:39:49.543657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:02.215 [2024-11-20 08:39:49.543666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:02.215 [2024-11-20 08:39:49.543679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.215 [2024-11-20 08:39:49.543690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:02.215 [2024-11-20 08:39:49.543702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:27:02.215 [2024-11-20 08:39:49.543712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.565034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.215 [2024-11-20 08:39:49.565089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:02.215 [2024-11-20 08:39:49.565106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.292 ms 00:27:02.215 [2024-11-20 08:39:49.565118]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.565624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.215 [2024-11-20 08:39:49.565646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:02.215 [2024-11-20 08:39:49.565659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:27:02.215 [2024-11-20 08:39:49.565679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.619686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.215 [2024-11-20 08:39:49.619754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:02.215 [2024-11-20 08:39:49.619770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.215 [2024-11-20 08:39:49.619781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.619866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.215 [2024-11-20 08:39:49.619878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:02.215 [2024-11-20 08:39:49.619889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.215 [2024-11-20 08:39:49.619904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.620000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.215 [2024-11-20 08:39:49.620016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:02.215 [2024-11-20 08:39:49.620027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.215 [2024-11-20 08:39:49.620037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.620055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.215 [2024-11-20 08:39:49.620066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:02.215 [2024-11-20 08:39:49.620077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.215 [2024-11-20 08:39:49.620087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.215 [2024-11-20 08:39:49.748161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.215 [2024-11-20 08:39:49.748225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:02.215 [2024-11-20 08:39:49.748254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.215 [2024-11-20 08:39:49.748282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.854326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.854400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:02.474 [2024-11-20 08:39:49.854417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.854455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.854562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.854575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:02.474 [2024-11-20 08:39:49.854587] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.854598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.854645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.854657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:02.474 [2024-11-20 08:39:49.854669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.854679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.854799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.854820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:02.474 [2024-11-20 08:39:49.854839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.854856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.854919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.854941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:02.474 [2024-11-20 08:39:49.854962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.854978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.855051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.855075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:02.474 [2024-11-20 08:39:49.855096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.855113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.855166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.474 [2024-11-20 08:39:49.855180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:02.474 [2024-11-20 08:39:49.855193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.474 [2024-11-20 08:39:49.855204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.474 [2024-11-20 08:39:49.855357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 707.192 ms, result 0 00:27:04.401 00:27:04.401 00:27:04.401 08:39:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:06.305 08:39:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:06.305 [2024-11-20 08:39:53.440374] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
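A quick consistency check on the shutdown dump above: write amplification is total writes divided by user writes, so WAF = 113088 / 112128 = 1.0086 to four decimals, exactly the value ftl_dev_dump_stats prints, with the 960 extra blocks being the FTL's own metadata writes on top of the user data.

What happens next is the core of the dirty-shutdown test: dirty_shutdown.sh@90 has just taken an md5sum of the reference file, and @93 launches spdk_dd to read the 262144 blocks back out of the recovered ftl0 device into testfile so the two checksums can be compared. The sketch below restates that check in Python; the binary path, bdev name, flags, and block count are copied from the log, while the helper names and the in-process comparison are illustrative, not the test's actual bash:

    import hashlib
    import subprocess

    SPDK_DD = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd"
    FTL_JSON = "/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json"

    def md5(path, chunk=1 << 20):
        """Stream a file through MD5, matching what md5sum reports."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def verify_after_dirty_shutdown(reference, restored, count=262144):
        # Read `count` blocks from the ftl0 bdev into `restored` using the
        # FTL config saved before the dirty shutdown, then compare against
        # the reference file. (Helper name and assert are illustrative.)
        subprocess.run(
            [SPDK_DD, "--ib=ftl0", f"--of={restored}",
             f"--count={count}", f"--json={FTL_JSON}"],
            check=True,
        )
        assert md5(reference) == md5(restored), "data lost across dirty shutdown"
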
00:27:06.305 [2024-11-20 08:39:53.440504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79328 ] 00:27:06.305 [2024-11-20 08:39:53.605516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.305 [2024-11-20 08:39:53.763600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.872 [2024-11-20 08:39:54.124734] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.873 [2024-11-20 08:39:54.124805] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.873 [2024-11-20 08:39:54.286431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.286499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:06.873 [2024-11-20 08:39:54.286520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:06.873 [2024-11-20 08:39:54.286531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.286584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.286596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.873 [2024-11-20 08:39:54.286610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:06.873 [2024-11-20 08:39:54.286620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.286642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:06.873 [2024-11-20 08:39:54.287640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:06.873 [2024-11-20 08:39:54.287670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.287681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.873 [2024-11-20 08:39:54.287692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:27:06.873 [2024-11-20 08:39:54.287702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.289172] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:06.873 [2024-11-20 08:39:54.309456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.309504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:06.873 [2024-11-20 08:39:54.309520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.316 ms 00:27:06.873 [2024-11-20 08:39:54.309530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.309604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.309616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:06.873 [2024-11-20 08:39:54.309627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:06.873 [2024-11-20 08:39:54.309637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.316523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:06.873 [2024-11-20 08:39:54.316559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.873 [2024-11-20 08:39:54.316571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:27:06.873 [2024-11-20 08:39:54.316581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.316670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.316685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.873 [2024-11-20 08:39:54.316696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:06.873 [2024-11-20 08:39:54.316706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.316750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.316763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:06.873 [2024-11-20 08:39:54.316774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:06.873 [2024-11-20 08:39:54.316783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.316810] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:06.873 [2024-11-20 08:39:54.321599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.321632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.873 [2024-11-20 08:39:54.321644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:27:06.873 [2024-11-20 08:39:54.321658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.321690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.321700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:06.873 [2024-11-20 08:39:54.321711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:06.873 [2024-11-20 08:39:54.321721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.321776] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:06.873 [2024-11-20 08:39:54.321799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:06.873 [2024-11-20 08:39:54.321833] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:06.873 [2024-11-20 08:39:54.321854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:06.873 [2024-11-20 08:39:54.321943] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:06.873 [2024-11-20 08:39:54.321956] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:06.873 [2024-11-20 08:39:54.321970] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:06.873 [2024-11-20 08:39:54.321982] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322007] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322019] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:06.873 [2024-11-20 08:39:54.322029] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:06.873 [2024-11-20 08:39:54.322038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:06.873 [2024-11-20 08:39:54.322048] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:06.873 [2024-11-20 08:39:54.322063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.322074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:06.873 [2024-11-20 08:39:54.322092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:06.873 [2024-11-20 08:39:54.322102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.322175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.873 [2024-11-20 08:39:54.322186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:06.873 [2024-11-20 08:39:54.322196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:06.873 [2024-11-20 08:39:54.322205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.873 [2024-11-20 08:39:54.322300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:06.873 [2024-11-20 08:39:54.322319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:06.873 [2024-11-20 08:39:54.322329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:06.873 [2024-11-20 08:39:54.322359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:06.873 [2024-11-20 08:39:54.322387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.873 [2024-11-20 08:39:54.322406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:06.873 [2024-11-20 08:39:54.322417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:06.873 [2024-11-20 08:39:54.322426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.873 [2024-11-20 08:39:54.322435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:06.873 [2024-11-20 08:39:54.322444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:06.873 [2024-11-20 08:39:54.322462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:06.873 [2024-11-20 08:39:54.322480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322489] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:06.873 [2024-11-20 08:39:54.322508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:06.873 [2024-11-20 08:39:54.322536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.873 [2024-11-20 08:39:54.322553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:06.873 [2024-11-20 08:39:54.322563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:06.873 [2024-11-20 08:39:54.322572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.874 [2024-11-20 08:39:54.322580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:06.874 [2024-11-20 08:39:54.322590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:06.874 [2024-11-20 08:39:54.322598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.874 [2024-11-20 08:39:54.322607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:06.874 [2024-11-20 08:39:54.322616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:06.874 [2024-11-20 08:39:54.322625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.874 [2024-11-20 08:39:54.322633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:06.874 [2024-11-20 08:39:54.322643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:06.874 [2024-11-20 08:39:54.322652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.874 [2024-11-20 08:39:54.322661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:06.874 [2024-11-20 08:39:54.322670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:06.874 [2024-11-20 08:39:54.322678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.874 [2024-11-20 08:39:54.322687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:06.874 [2024-11-20 08:39:54.322696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:06.874 [2024-11-20 08:39:54.322705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.874 [2024-11-20 08:39:54.322714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:06.874 [2024-11-20 08:39:54.322724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:06.874 [2024-11-20 08:39:54.322733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.874 [2024-11-20 08:39:54.322742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.874 [2024-11-20 08:39:54.322752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:06.874 [2024-11-20 08:39:54.322762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:06.874 [2024-11-20 08:39:54.322771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:06.874 
[2024-11-20 08:39:54.322780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:06.874 [2024-11-20 08:39:54.322789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:06.874 [2024-11-20 08:39:54.322798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:06.874 [2024-11-20 08:39:54.322808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:06.874 [2024-11-20 08:39:54.322820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.322832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:06.874 [2024-11-20 08:39:54.322842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:06.874 [2024-11-20 08:39:54.322852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:06.874 [2024-11-20 08:39:54.322862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:06.874 [2024-11-20 08:39:54.322873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:06.874 [2024-11-20 08:39:54.322883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:06.874 [2024-11-20 08:39:54.322893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:06.874 [2024-11-20 08:39:54.322903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:06.874 [2024-11-20 08:39:54.322913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:06.874 [2024-11-20 08:39:54.322923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.322933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.322943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.322953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.322963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:06.874 [2024-11-20 08:39:54.322972] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:06.874 [2024-11-20 08:39:54.322996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.323008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.874 [2024-11-20 08:39:54.323019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:06.874 [2024-11-20 08:39:54.323029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:06.874 [2024-11-20 08:39:54.323040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:06.874 [2024-11-20 08:39:54.323052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.323063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:06.874 [2024-11-20 08:39:54.323073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:27:06.874 [2024-11-20 08:39:54.323082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.365012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.365068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.874 [2024-11-20 08:39:54.365083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.948 ms 00:27:06.874 [2024-11-20 08:39:54.365094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.365198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.365209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:06.874 [2024-11-20 08:39:54.365219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:06.874 [2024-11-20 08:39:54.365229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.428594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.428650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.874 [2024-11-20 08:39:54.428665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.388 ms 00:27:06.874 [2024-11-20 08:39:54.428675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.428736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.428748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.874 [2024-11-20 08:39:54.428760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:06.874 [2024-11-20 08:39:54.428774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.429278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.429300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.874 [2024-11-20 08:39:54.429312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:27:06.874 [2024-11-20 08:39:54.429322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.874 [2024-11-20 08:39:54.429441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.874 [2024-11-20 08:39:54.429455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.874 [2024-11-20 08:39:54.429466] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:06.874 [2024-11-20 08:39:54.429481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.450165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.450215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.134 [2024-11-20 08:39:54.450233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.695 ms 00:27:07.134 [2024-11-20 08:39:54.450245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.470579] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:07.134 [2024-11-20 08:39:54.470650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:07.134 [2024-11-20 08:39:54.470667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.470678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:07.134 [2024-11-20 08:39:54.470691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.316 ms 00:27:07.134 [2024-11-20 08:39:54.470702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.501156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.501220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:07.134 [2024-11-20 08:39:54.501235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.447 ms 00:27:07.134 [2024-11-20 08:39:54.501245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.520947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.521019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:07.134 [2024-11-20 08:39:54.521034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:27:07.134 [2024-11-20 08:39:54.521044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.539686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.539740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:07.134 [2024-11-20 08:39:54.539753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:27:07.134 [2024-11-20 08:39:54.539763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.540531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.540563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:07.134 [2024-11-20 08:39:54.540575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:27:07.134 [2024-11-20 08:39:54.540589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.628751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.628821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:07.134 [2024-11-20 08:39:54.628844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.281 ms 00:27:07.134 [2024-11-20 08:39:54.628855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.640954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:07.134 [2024-11-20 08:39:54.644168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.134 [2024-11-20 08:39:54.644206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:07.134 [2024-11-20 08:39:54.644221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.265 ms 00:27:07.134 [2024-11-20 08:39:54.644231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.134 [2024-11-20 08:39:54.644346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.644361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:07.135 [2024-11-20 08:39:54.644372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:07.135 [2024-11-20 08:39:54.644385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.135 [2024-11-20 08:39:54.645888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.645926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:07.135 [2024-11-20 08:39:54.645939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:27:07.135 [2024-11-20 08:39:54.645949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.135 [2024-11-20 08:39:54.646009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.646021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:07.135 [2024-11-20 08:39:54.646032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:07.135 [2024-11-20 08:39:54.646042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.135 [2024-11-20 08:39:54.646088] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:07.135 [2024-11-20 08:39:54.646105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.646116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:07.135 [2024-11-20 08:39:54.646126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:07.135 [2024-11-20 08:39:54.646136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.135 [2024-11-20 08:39:54.684040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.684102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:07.135 [2024-11-20 08:39:54.684119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.941 ms 00:27:07.135 [2024-11-20 08:39:54.684136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.135 [2024-11-20 08:39:54.684244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.135 [2024-11-20 08:39:54.684257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:07.135 [2024-11-20 08:39:54.684269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:07.135 [2024-11-20 08:39:54.684279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
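A reading aid for the traces above: mngt/ftl_mngt.c emits every management step as a fixed group of records, the 'Action' (or 'Rollback') marker from line 427, the step name from line 428, the duration from line 430, and the status from line 431. That regularity makes logs like this easy to profile mechanically. Below is a small illustrative parser in Python; the regular expression and the optional runtime-stamp handling are assumptions about this log's rendering, not anything shipped with SPDK:

    import re

    # A step is two adjacent records: "428:...name: <step>" followed by
    # "430:...duration: <ms> ms", each optionally prefixed by a runtime
    # stamp and a bracketed wall-clock timestamp in this rendering.
    STEP = re.compile(
        r"428:trace_step: \*NOTICE\*: \[FTL\]\[(?P<dev>\w+)\] name: (?P<name>.+?)\s+"
        r"(?:\d{2}:\d{2}:\d{2}\.\d{3}\s+)?\[[^\]]+\]\s+mngt/ftl_mngt\.c: 430:trace_step: "
        r"\*NOTICE\*: \[FTL\]\[(?P=dev)\] duration: (?P<ms>[\d.]+) ms"
    )

    def step_durations(log_text):
        """Yield (step name, duration in ms) for each trace_step pair."""
        for m in STEP.finditer(log_text):
            yield m.group("name"), float(m.group("ms"))

Fed the startup sequence above and sorted by duration, it would surface 'Restore P2L checkpoints' (88.281 ms) and 'Initialize NV cache' (63.388 ms) as the slowest steps.
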
00:27:07.135 [2024-11-20 08:39:54.685494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.225 ms, result 0 00:27:08.512  [2024-11-20T08:39:57.010Z] Copying: 1208/1048576 [kB] (1208 kBps)
[log condensed: intermediate spdk_dd copy-progress updates from 9600 kB to 1010 MB, 33-37 MBps]
[2024-11-20T08:40:26.616Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 08:40:26.225323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.225391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:39.055 [2024-11-20 08:40:26.225411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:39.055 [2024-11-20 08:40:26.225442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.225470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:39.055 [2024-11-20 08:40:26.230824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.230860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:39.055 [2024-11-20 08:40:26.230873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.341 ms 00:27:39.055 [2024-11-20 08:40:26.230884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.231104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.231118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:39.055 [2024-11-20 08:40:26.231129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:27:39.055 [2024-11-20 08:40:26.231143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:27:39.055 [2024-11-20 08:40:26.243309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.243457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:39.055 [2024-11-20 08:40:26.243473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.165 ms 00:27:39.055 [2024-11-20 08:40:26.243498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.248817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.248850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:39.055 [2024-11-20 08:40:26.248863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms 00:27:39.055 [2024-11-20 08:40:26.248873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.288507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.288576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:39.055 [2024-11-20 08:40:26.288592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.617 ms 00:27:39.055 [2024-11-20 08:40:26.288603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.310908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.310971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:39.055 [2024-11-20 08:40:26.310993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.259 ms 00:27:39.055 [2024-11-20 08:40:26.311004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.313328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.313365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:39.055 [2024-11-20 08:40:26.313379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:27:39.055 [2024-11-20 08:40:26.313390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.350232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.350269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:39.055 [2024-11-20 08:40:26.350282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.882 ms 00:27:39.055 [2024-11-20 08:40:26.350293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.387070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.387131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:39.055 [2024-11-20 08:40:26.387159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.794 ms 00:27:39.055 [2024-11-20 08:40:26.387170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.423710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.423753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:39.055 [2024-11-20 08:40:26.423768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.537 ms 00:27:39.055 [2024-11-20 
08:40:26.423785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.459606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.055 [2024-11-20 08:40:26.459647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:39.055 [2024-11-20 08:40:26.459661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.791 ms 00:27:39.055 [2024-11-20 08:40:26.459671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.055 [2024-11-20 08:40:26.459710] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:39.055 [2024-11-20 08:40:26.459727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:39.055 [2024-11-20 08:40:26.459740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[log condensed: Bands 3-69 all report 0 / 261120 wr_cnt: 0 state: free]
00:27:39.056 [2024-11-20 08:40:26.460470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0
state: free 00:27:39.056 [2024-11-20 08:40:26.460481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:39.056 [2024-11-20 08:40:26.460741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:39.056 [2024-11-20 08:40:26.460811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:39.056 [2024-11-20 08:40:26.460821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 405049cc-0b62-4405-9897-35956d2f78a3
00:27:39.056 [2024-11-20 08:40:26.460832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:27:39.056 [2024-11-20 08:40:26.460842] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 152512
00:27:39.056 [2024-11-20 08:40:26.460852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 150528
00:27:39.056 [2024-11-20 08:40:26.460863] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0132
00:27:39.056 [2024-11-20 08:40:26.460876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:39.056 [2024-11-20 08:40:26.460886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:39.056 [2024-11-20 08:40:26.460896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:39.056 [2024-11-20 08:40:26.460916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:39.056 [2024-11-20 08:40:26.460925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:39.056 [2024-11-20 08:40:26.460935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:39.056 [2024-11-20 08:40:26.460945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:39.056 [2024-11-20 08:40:26.460956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms
00:27:39.056 [2024-11-20 08:40:26.460965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.056 [2024-11-20 08:40:26.480888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:39.056 [2024-11-20 08:40:26.480926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:39.056 [2024-11-20 08:40:26.480946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.907 ms
00:27:39.056 [2024-11-20 08:40:26.480956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.056 [2024-11-20 08:40:26.481462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:39.056 [2024-11-20 08:40:26.481476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:39.056 [2024-11-20 08:40:26.481487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms
00:27:39.056 [2024-11-20 08:40:26.481497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.056 [2024-11-20 08:40:26.533650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:39.056 [2024-11-20 08:40:26.533721] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.056 [2024-11-20 08:40:26.533736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.056 [2024-11-20 08:40:26.533747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.056 [2024-11-20 08:40:26.533824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.056 [2024-11-20 08:40:26.533836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.056 [2024-11-20 08:40:26.533846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.056 [2024-11-20 08:40:26.533857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.056 [2024-11-20 08:40:26.533947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.056 [2024-11-20 08:40:26.533960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.056 [2024-11-20 08:40:26.533975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.057 [2024-11-20 08:40:26.533996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.057 [2024-11-20 08:40:26.534015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.057 [2024-11-20 08:40:26.534025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.057 [2024-11-20 08:40:26.534035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.057 [2024-11-20 08:40:26.534045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.316 [2024-11-20 08:40:26.660010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.316 [2024-11-20 08:40:26.660080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.316 [2024-11-20 08:40:26.660105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.316 [2024-11-20 08:40:26.660116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.316 [2024-11-20 08:40:26.763872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.316 [2024-11-20 08:40:26.763935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.316 [2024-11-20 08:40:26.763950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.316 [2024-11-20 08:40:26.763961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.316 [2024-11-20 08:40:26.764099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.316 [2024-11-20 08:40:26.764114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.316 [2024-11-20 08:40:26.764125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.316 [2024-11-20 08:40:26.764139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.316 [2024-11-20 08:40:26.764187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.316 [2024-11-20 08:40:26.764199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.316 [2024-11-20 08:40:26.764209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.316 [2024-11-20 08:40:26.764219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.316 [2024-11-20 08:40:26.764404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback
00:27:39.316 [2024-11-20 08:40:26.764418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:39.316 [2024-11-20 08:40:26.764428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:39.316 [2024-11-20 08:40:26.764438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.316 [2024-11-20 08:40:26.764476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:39.316 [2024-11-20 08:40:26.764489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:39.316 [2024-11-20 08:40:26.764499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:39.316 [2024-11-20 08:40:26.764509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.316 [2024-11-20 08:40:26.764546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:39.316 [2024-11-20 08:40:26.764557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:39.316 [2024-11-20 08:40:26.764567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:39.316 [2024-11-20 08:40:26.764577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.316 [2024-11-20 08:40:26.764620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:39.316 [2024-11-20 08:40:26.764632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:39.316 [2024-11-20 08:40:26.764642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:39.316 [2024-11-20 08:40:26.764653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:39.316 [2024-11-20 08:40:26.764767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.292 ms, result 0
00:27:40.250
00:27:40.250
00:27:40.509 08:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:42.413 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:42.413 08:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 08:40:29.645811] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization...
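The ftl_debug dump above is internally consistent and easy to sanity-check: "total valid LBAs: 262656" is exactly the sum of the per-band validity counts (261120 in closed Band 1 plus 1536 in open Band 2), and "WAF: 1.0132" is total writes over user writes, 152512 / 150528 ≈ 1.0132. A minimal sketch of that arithmetic, with the constants transcribed from the dump above; the check itself is ours, not part of the SPDK tooling:

```python
# Values transcribed from the ftl_dev_dump_bands / ftl_dev_dump_stats output above;
# the assertions are a hand-rolled consistency check, not SPDK code.
band_validity = {1: 261120, 2: 1536}   # all other bands report 0 / 261120
total_valid_lbas = 262656
total_writes = 152512
user_writes = 150528

# The valid-LBA total should equal the sum of the per-band validity counts.
assert sum(band_validity.values()) == total_valid_lbas

# Write amplification factor = total writes (user data + FTL metadata) / user writes.
waf = total_writes / user_writes
assert round(waf, 4) == 1.0132         # matches the logged "WAF: 1.0132"
print(f"WAF = {waf:.4f}")
```

The roughly 1.3% write overhead is consistent with the FTL's own metadata persists (band info, valid map, trim, superblock) traced in the shutdown sequence above.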
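finish_msg above reports the whole 'FTL shutdown' management process at 540.292 ms, while every step logs its own Action/Rollback, name, duration, status quadruple through trace_step. One quick way to rank steps by cost is to pair each "name:" record with the "duration:" record that follows it. A sketch of that, with the regexes derived from the record format visible in this log; the console.log path in the usage comment is hypothetical:

```python
import re
from collections import defaultdict

# Record format observed above, e.g.:
#   [2024-11-20 08:40:26.288576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
#   [2024-11-20 08:40:26.288592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.617 ms
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([\d.]+) ms")

def step_durations(lines):
    """Pair each trace_step 'name:' record with the 'duration:' record that follows it."""
    durations = defaultdict(float)
    pending = None  # step name waiting for its duration line
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(2).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            durations[pending] += float(m.group(2))
            pending = None
    return durations

# Usage sketch (log path is hypothetical):
# with open("console.log") as f:
#     for name, ms in sorted(step_durations(f).items(), key=lambda kv: -kv[1]):
#         print(f"{ms:8.3f} ms  {name}")
```

Run over this excerpt, it would surface 'Persist NV cache metadata' (39.617 ms) and 'Persist band info metadata' (36.882 ms) among the slowest shutdown steps, and lets the per-step totals be compared against the 540.292 ms finish_msg figure.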
00:27:42.413 [2024-11-20 08:40:29.645929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79687 ] 00:27:42.413 [2024-11-20 08:40:29.828029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.413 [2024-11-20 08:40:29.941543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.981 [2024-11-20 08:40:30.302721] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.981 [2024-11-20 08:40:30.302795] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.981 [2024-11-20 08:40:30.464620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.464679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:42.981 [2024-11-20 08:40:30.464700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:42.981 [2024-11-20 08:40:30.464711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.464758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.464770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.981 [2024-11-20 08:40:30.464785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:42.981 [2024-11-20 08:40:30.464795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.464816] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:42.981 [2024-11-20 08:40:30.465841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:42.981 [2024-11-20 08:40:30.465872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.465883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.981 [2024-11-20 08:40:30.465894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:27:42.981 [2024-11-20 08:40:30.465904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.467341] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:42.981 [2024-11-20 08:40:30.486863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.486905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:42.981 [2024-11-20 08:40:30.486922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.554 ms 00:27:42.981 [2024-11-20 08:40:30.486933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.487020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.487034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:42.981 [2024-11-20 08:40:30.487047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:42.981 [2024-11-20 08:40:30.487057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.494241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:42.981 [2024-11-20 08:40:30.494282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.981 [2024-11-20 08:40:30.494296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.119 ms 00:27:42.981 [2024-11-20 08:40:30.494306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.494399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.494415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.981 [2024-11-20 08:40:30.494426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:42.981 [2024-11-20 08:40:30.494436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.494485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.494497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:42.981 [2024-11-20 08:40:30.494508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:42.981 [2024-11-20 08:40:30.494518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.494546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:42.981 [2024-11-20 08:40:30.499467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.499501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.981 [2024-11-20 08:40:30.499514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:27:42.981 [2024-11-20 08:40:30.499528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.499562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.981 [2024-11-20 08:40:30.499573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:42.981 [2024-11-20 08:40:30.499583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:42.981 [2024-11-20 08:40:30.499593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.981 [2024-11-20 08:40:30.499647] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:42.981 [2024-11-20 08:40:30.499670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:42.981 [2024-11-20 08:40:30.499706] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:42.981 [2024-11-20 08:40:30.499727] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:42.981 [2024-11-20 08:40:30.499816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:42.981 [2024-11-20 08:40:30.499829] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:42.981 [2024-11-20 08:40:30.499842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:42.981 [2024-11-20 08:40:30.499855] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:42.981 [2024-11-20 08:40:30.499868] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:42.981 [2024-11-20 08:40:30.499879] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:42.982 [2024-11-20 08:40:30.499889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:42.982 [2024-11-20 08:40:30.499899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:42.982 [2024-11-20 08:40:30.499909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:42.982 [2024-11-20 08:40:30.499923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.982 [2024-11-20 08:40:30.499934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:42.982 [2024-11-20 08:40:30.499944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:42.982 [2024-11-20 08:40:30.499954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.982 [2024-11-20 08:40:30.500043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.982 [2024-11-20 08:40:30.500055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:42.982 [2024-11-20 08:40:30.500065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:42.982 [2024-11-20 08:40:30.500075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.982 [2024-11-20 08:40:30.500172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:42.982 [2024-11-20 08:40:30.500190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:42.982 [2024-11-20 08:40:30.500201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:42.982 [2024-11-20 08:40:30.500231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:42.982 [2024-11-20 08:40:30.500260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.982 [2024-11-20 08:40:30.500280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:42.982 [2024-11-20 08:40:30.500289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:42.982 [2024-11-20 08:40:30.500298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.982 [2024-11-20 08:40:30.500307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:42.982 [2024-11-20 08:40:30.500316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:42.982 [2024-11-20 08:40:30.500335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:42.982 [2024-11-20 08:40:30.500354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500363] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:42.982 [2024-11-20 08:40:30.500382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:42.982 [2024-11-20 08:40:30.500410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:42.982 [2024-11-20 08:40:30.500437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:42.982 [2024-11-20 08:40:30.500465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:42.982 [2024-11-20 08:40:30.500492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.982 [2024-11-20 08:40:30.500509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:42.982 [2024-11-20 08:40:30.500518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:42.982 [2024-11-20 08:40:30.500527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.982 [2024-11-20 08:40:30.500536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:42.982 [2024-11-20 08:40:30.500545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:42.982 [2024-11-20 08:40:30.500554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:42.982 [2024-11-20 08:40:30.500572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:42.982 [2024-11-20 08:40:30.500581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500590] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:42.982 [2024-11-20 08:40:30.500600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:42.982 [2024-11-20 08:40:30.500610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.982 [2024-11-20 08:40:30.500629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:42.982 [2024-11-20 08:40:30.500638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:42.982 [2024-11-20 08:40:30.500647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:42.982 
[2024-11-20 08:40:30.500657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:42.982 [2024-11-20 08:40:30.500666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:42.982 [2024-11-20 08:40:30.500675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:42.982 [2024-11-20 08:40:30.500686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:42.982 [2024-11-20 08:40:30.500698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.982 [2024-11-20 08:40:30.500709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:42.982 [2024-11-20 08:40:30.500719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:42.982 [2024-11-20 08:40:30.500729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:42.982 [2024-11-20 08:40:30.500739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:42.982 [2024-11-20 08:40:30.500750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:42.982 [2024-11-20 08:40:30.500760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:42.982 [2024-11-20 08:40:30.500770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:42.982 [2024-11-20 08:40:30.500781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:42.983 [2024-11-20 08:40:30.500791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:42.983 [2024-11-20 08:40:30.500801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:42.983 [2024-11-20 08:40:30.500852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:42.983 [2024-11-20 08:40:30.500869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.983 [2024-11-20 08:40:30.500890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:42.983 [2024-11-20 08:40:30.500900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:42.983 [2024-11-20 08:40:30.500912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:42.983 [2024-11-20 08:40:30.500923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.983 [2024-11-20 08:40:30.500934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:42.983 [2024-11-20 08:40:30.500944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:27:42.983 [2024-11-20 08:40:30.500954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.983 [2024-11-20 08:40:30.536533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.983 [2024-11-20 08:40:30.536584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.983 [2024-11-20 08:40:30.536599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.579 ms 00:27:42.983 [2024-11-20 08:40:30.536610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.983 [2024-11-20 08:40:30.536706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.983 [2024-11-20 08:40:30.536718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:42.983 [2024-11-20 08:40:30.536728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:42.983 [2024-11-20 08:40:30.536739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.593701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.243 [2024-11-20 08:40:30.593766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:43.243 [2024-11-20 08:40:30.593781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.982 ms 00:27:43.243 [2024-11-20 08:40:30.593792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.593856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.243 [2024-11-20 08:40:30.593867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:43.243 [2024-11-20 08:40:30.593879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:43.243 [2024-11-20 08:40:30.593895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.594421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.243 [2024-11-20 08:40:30.594438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:43.243 [2024-11-20 08:40:30.594450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:27:43.243 [2024-11-20 08:40:30.594460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.594593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.243 [2024-11-20 08:40:30.594607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:43.243 [2024-11-20 08:40:30.594618] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:27:43.243 [2024-11-20 08:40:30.594634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.615179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.243 [2024-11-20 08:40:30.615230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:43.243 [2024-11-20 08:40:30.615249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.556 ms 00:27:43.243 [2024-11-20 08:40:30.615260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.243 [2024-11-20 08:40:30.635723] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:43.243 [2024-11-20 08:40:30.635798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:43.244 [2024-11-20 08:40:30.635817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.635828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:43.244 [2024-11-20 08:40:30.635841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.452 ms 00:27:43.244 [2024-11-20 08:40:30.635851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.244 [2024-11-20 08:40:30.667483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.667793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:43.244 [2024-11-20 08:40:30.667823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.599 ms 00:27:43.244 [2024-11-20 08:40:30.667835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.244 [2024-11-20 08:40:30.688045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.688114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:43.244 [2024-11-20 08:40:30.688130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.112 ms 00:27:43.244 [2024-11-20 08:40:30.688141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.244 [2024-11-20 08:40:30.707691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.707759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:43.244 [2024-11-20 08:40:30.707775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.511 ms 00:27:43.244 [2024-11-20 08:40:30.707785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.244 [2024-11-20 08:40:30.708632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.708668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:43.244 [2024-11-20 08:40:30.708681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:27:43.244 [2024-11-20 08:40:30.708696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.244 [2024-11-20 08:40:30.797030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.244 [2024-11-20 08:40:30.797332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:43.244 [2024-11-20 08:40:30.797371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.444 ms 00:27:43.244 [2024-11-20 08:40:30.797383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.811562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:43.504 [2024-11-20 08:40:30.814844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.814899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:43.504 [2024-11-20 08:40:30.814916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.422 ms 00:27:43.504 [2024-11-20 08:40:30.814927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.815059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.815075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:43.504 [2024-11-20 08:40:30.815088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:43.504 [2024-11-20 08:40:30.815103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.816005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.816036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:43.504 [2024-11-20 08:40:30.816049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:27:43.504 [2024-11-20 08:40:30.816060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.816091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.816103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:43.504 [2024-11-20 08:40:30.816114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:43.504 [2024-11-20 08:40:30.816125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.816160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:43.504 [2024-11-20 08:40:30.816177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.816188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:43.504 [2024-11-20 08:40:30.816198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:43.504 [2024-11-20 08:40:30.816220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.855770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.855824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:43.504 [2024-11-20 08:40:30.855842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.591 ms 00:27:43.504 [2024-11-20 08:40:30.855859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.504 [2024-11-20 08:40:30.855963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.504 [2024-11-20 08:40:30.855976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:43.504 [2024-11-20 08:40:30.855997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:43.504 [2024-11-20 08:40:30.856009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
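In the superblock layout dump above, the per-region blk_offs/blk_sz values are hexadecimal block counts, while the dump_region lines print MiB. The two agree at a 4 KiB FTL block, which can be inferred from the dump itself: blk_sz 0x5000 is 20480 blocks, and 20480 x 4 KiB = 80.00 MiB, matching the l2p region's size, while blk_offs 0x20 is 32 blocks = 0.125 MiB, printed as "0.12 MiB". A minimal converter for reading those lines, assuming that inferred 4 KiB block size; the pairing of region type 0x2 with the l2p region is our reading of the two dumps, not something the log states:

```python
import re

FTL_BLOCK_SIZE = 4096  # inferred above: blk_sz 0x5000 (20480 blocks) prints as 80.00 MiB

# Matches lines like: "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"
REGION_RE = re.compile(
    r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
)

def region_mib(line):
    """Convert one 'SB metadata layout' line to (type, offset MiB, size MiB)."""
    m = REGION_RE.search(line)
    if not m:
        return None
    rtype, _ver, offs, size = m.groups()
    to_mib = lambda blocks_hex: int(blocks_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
    return rtype, to_mib(offs), to_mib(size)

# e.g. the presumed l2p region from the dump above:
print(region_mib("Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"))
# -> ('0x2', 0.125, 80.0)
```

Note that dump_region prints the 0.125 MiB offset as "0.12 MiB", so small regions lose a little precision relative to the raw block counts.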
00:27:43.504 [2024-11-20 08:40:30.857164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.711 ms, result 0 00:27:44.881  [2024-11-20T08:40:33.377Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T08:40:34.355Z] Copying: 58/1024 [MB] (29 MBps) [2024-11-20T08:40:35.293Z] Copying: 89/1024 [MB] (31 MBps) [2024-11-20T08:40:36.228Z] Copying: 121/1024 [MB] (31 MBps) [2024-11-20T08:40:37.162Z] Copying: 151/1024 [MB] (30 MBps) [2024-11-20T08:40:38.094Z] Copying: 183/1024 [MB] (31 MBps) [2024-11-20T08:40:39.469Z] Copying: 212/1024 [MB] (29 MBps) [2024-11-20T08:40:40.405Z] Copying: 242/1024 [MB] (29 MBps) [2024-11-20T08:40:41.343Z] Copying: 271/1024 [MB] (28 MBps) [2024-11-20T08:40:42.282Z] Copying: 301/1024 [MB] (30 MBps) [2024-11-20T08:40:43.219Z] Copying: 331/1024 [MB] (30 MBps) [2024-11-20T08:40:44.156Z] Copying: 365/1024 [MB] (34 MBps) [2024-11-20T08:40:45.094Z] Copying: 398/1024 [MB] (32 MBps) [2024-11-20T08:40:46.473Z] Copying: 427/1024 [MB] (28 MBps) [2024-11-20T08:40:47.410Z] Copying: 456/1024 [MB] (29 MBps) [2024-11-20T08:40:48.350Z] Copying: 484/1024 [MB] (28 MBps) [2024-11-20T08:40:49.289Z] Copying: 513/1024 [MB] (28 MBps) [2024-11-20T08:40:50.225Z] Copying: 542/1024 [MB] (28 MBps) [2024-11-20T08:40:51.162Z] Copying: 572/1024 [MB] (29 MBps) [2024-11-20T08:40:52.098Z] Copying: 602/1024 [MB] (29 MBps) [2024-11-20T08:40:53.069Z] Copying: 632/1024 [MB] (30 MBps) [2024-11-20T08:40:54.445Z] Copying: 662/1024 [MB] (29 MBps) [2024-11-20T08:40:55.380Z] Copying: 693/1024 [MB] (30 MBps) [2024-11-20T08:40:56.317Z] Copying: 721/1024 [MB] (28 MBps) [2024-11-20T08:40:57.255Z] Copying: 750/1024 [MB] (28 MBps) [2024-11-20T08:40:58.193Z] Copying: 777/1024 [MB] (27 MBps) [2024-11-20T08:40:59.131Z] Copying: 806/1024 [MB] (29 MBps) [2024-11-20T08:41:00.067Z] Copying: 834/1024 [MB] (27 MBps) [2024-11-20T08:41:01.445Z] Copying: 863/1024 [MB] (28 MBps) [2024-11-20T08:41:02.384Z] Copying: 890/1024 [MB] (27 MBps) [2024-11-20T08:41:03.320Z] Copying: 918/1024 [MB] (27 MBps) [2024-11-20T08:41:04.256Z] Copying: 946/1024 [MB] (27 MBps) [2024-11-20T08:41:05.191Z] Copying: 973/1024 [MB] (27 MBps) [2024-11-20T08:41:06.138Z] Copying: 1002/1024 [MB] (28 MBps) [2024-11-20T08:41:06.138Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 08:41:05.821958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.822093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:18.577 [2024-11-20 08:41:05.822122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:18.577 [2024-11-20 08:41:05.822142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.822184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:18.577 [2024-11-20 08:41:05.830406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.830446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:18.577 [2024-11-20 08:41:05.830470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.203 ms 00:28:18.577 [2024-11-20 08:41:05.830483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.830762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.830778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:28:18.577 [2024-11-20 08:41:05.830792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:28:18.577 [2024-11-20 08:41:05.830805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.834287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.834310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:18.577 [2024-11-20 08:41:05.834326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:28:18.577 [2024-11-20 08:41:05.834339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.840594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.840622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:18.577 [2024-11-20 08:41:05.840633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:28:18.577 [2024-11-20 08:41:05.840644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.877830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.877872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:18.577 [2024-11-20 08:41:05.877887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.176 ms 00:28:18.577 [2024-11-20 08:41:05.877897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.899076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.899255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:18.577 [2024-11-20 08:41:05.899280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.167 ms 00:28:18.577 [2024-11-20 08:41:05.899291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.901622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.901659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:18.577 [2024-11-20 08:41:05.901671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:28:18.577 [2024-11-20 08:41:05.901681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.938406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.938489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:18.577 [2024-11-20 08:41:05.938506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.766 ms 00:28:18.577 [2024-11-20 08:41:05.938517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:05.974303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:05.974347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:18.577 [2024-11-20 08:41:05.974360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.767 ms 00:28:18.577 [2024-11-20 08:41:05.974370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:06.010121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:06.010159] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:18.577 [2024-11-20 08:41:06.010172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.769 ms 00:28:18.577 [2024-11-20 08:41:06.010182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:06.045562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.577 [2024-11-20 08:41:06.045732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:18.577 [2024-11-20 08:41:06.045755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.353 ms 00:28:18.577 [2024-11-20 08:41:06.045765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.577 [2024-11-20 08:41:06.045822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:18.577 [2024-11-20 08:41:06.045841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:18.577 [2024-11-20 08:41:06.045861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:18.577 [2024-11-20 08:41:06.045873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.045980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
19: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:18.577 [2024-11-20 08:41:06.046103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046346] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046614] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 
08:41:06.046880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:18.578 [2024-11-20 08:41:06.046962] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:18.578 [2024-11-20 08:41:06.046975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 405049cc-0b62-4405-9897-35956d2f78a3 00:28:18.578 [2024-11-20 08:41:06.047001] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:18.578 [2024-11-20 08:41:06.047012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:18.578 [2024-11-20 08:41:06.047022] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:18.578 [2024-11-20 08:41:06.047032] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:18.578 [2024-11-20 08:41:06.047042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:18.578 [2024-11-20 08:41:06.047052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:18.578 [2024-11-20 08:41:06.047073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:18.578 [2024-11-20 08:41:06.047083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:18.578 [2024-11-20 08:41:06.047092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:18.578 [2024-11-20 08:41:06.047102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.578 [2024-11-20 08:41:06.047113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:18.578 [2024-11-20 08:41:06.047123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:28:18.578 [2024-11-20 08:41:06.047133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.578 [2024-11-20 08:41:06.067096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.578 [2024-11-20 08:41:06.067125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:18.578 [2024-11-20 08:41:06.067138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.937 ms 00:28:18.578 [2024-11-20 08:41:06.067148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.578 [2024-11-20 08:41:06.067650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.578 [2024-11-20 08:41:06.067661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:18.578 [2024-11-20 08:41:06.067678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:28:18.578 [2024-11-20 08:41:06.067688] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.578 [2024-11-20 08:41:06.118270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.579 [2024-11-20 08:41:06.118324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:18.579 [2024-11-20 08:41:06.118340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.579 [2024-11-20 08:41:06.118351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.579 [2024-11-20 08:41:06.118425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.579 [2024-11-20 08:41:06.118436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:18.579 [2024-11-20 08:41:06.118452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.579 [2024-11-20 08:41:06.118462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.579 [2024-11-20 08:41:06.118547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.579 [2024-11-20 08:41:06.118561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:18.579 [2024-11-20 08:41:06.118571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.579 [2024-11-20 08:41:06.118581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.579 [2024-11-20 08:41:06.118598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.579 [2024-11-20 08:41:06.118609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:18.579 [2024-11-20 08:41:06.118619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.579 [2024-11-20 08:41:06.118634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.241705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.241750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:18.843 [2024-11-20 08:41:06.241765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.241776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:18.843 [2024-11-20 08:41:06.342299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:18.843 [2024-11-20 08:41:06.342432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:18.843 [2024-11-20 08:41:06.342506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:18.843 [2024-11-20 08:41:06.342671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:18.843 [2024-11-20 08:41:06.342739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:18.843 [2024-11-20 08:41:06.342811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.342863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:18.843 [2024-11-20 08:41:06.342875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:18.843 [2024-11-20 08:41:06.342885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:18.843 [2024-11-20 08:41:06.342896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.843 [2024-11-20 08:41:06.343038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.887 ms, result 0 00:28:20.222 00:28:20.222 00:28:20.222 08:41:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:21.652 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:21.652 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:21.652 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:21.652 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:21.653 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:21.910 Process with pid 78010 is not found 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78010 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@957 -- # '[' -z 78010 ']' 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@961 -- # kill -0 78010 
00:28:21.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (78010) - No such process 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@984 -- # echo 'Process with pid 78010 is not found' 00:28:21.910 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:22.477 Remove shared memory files 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:22.477 ************************************ 00:28:22.477 END TEST ftl_dirty_shutdown 00:28:22.477 ************************************ 00:28:22.477 00:28:22.477 real 3m21.695s 00:28:22.477 user 3m45.921s 00:28:22.477 sys 0m37.296s 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1133 -- # xtrace_disable 00:28:22.477 08:41:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.477 08:41:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:22.477 08:41:09 ftl -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:28:22.477 08:41:09 ftl -- common/autotest_common.sh@1114 -- # xtrace_disable 00:28:22.477 08:41:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:22.477 ************************************ 00:28:22.477 START TEST ftl_upgrade_shutdown 00:28:22.477 ************************************ 00:28:22.477 08:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:22.477 * Looking for test storage... 
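The `killprocess 78010` trace above (autotest_common.sh lines 957-984) shows the kill -0 liveness idiom: `kill -0 <pid>` delivers no signal and only tests whether the pid exists, which is why the helper prints 'Process with pid 78010 is not found' instead of failing when the dirty-shutdown target has already exited. A minimal bash sketch of that idiom, reconstructed from the trace (simplified; not the full SPDK helper, which also retries and reaps the process):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1              # matches the '[' -z 78010 ']' guard traced above
        if kill -0 "$pid" 2>/dev/null; then    # pid still alive: terminate and reap it
            kill "$pid" && wait "$pid"
        else                                   # pid already gone, as in this log
            echo "Process with pid $pid is not found"
        fi
    }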
00:28:22.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.477 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:28:22.477 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1638 -- # lcov --version 00:28:22.477 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:28:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.737 --rc genhtml_branch_coverage=1 00:28:22.737 --rc genhtml_function_coverage=1 00:28:22.737 --rc genhtml_legend=1 00:28:22.737 --rc geninfo_all_blocks=1 00:28:22.737 --rc geninfo_unexecuted_blocks=1 00:28:22.737 00:28:22.737 ' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:28:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.737 --rc genhtml_branch_coverage=1 00:28:22.737 --rc genhtml_function_coverage=1 00:28:22.737 --rc genhtml_legend=1 00:28:22.737 --rc geninfo_all_blocks=1 00:28:22.737 --rc geninfo_unexecuted_blocks=1 00:28:22.737 00:28:22.737 ' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:28:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.737 --rc genhtml_branch_coverage=1 00:28:22.737 --rc genhtml_function_coverage=1 00:28:22.737 --rc genhtml_legend=1 00:28:22.737 --rc geninfo_all_blocks=1 00:28:22.737 --rc geninfo_unexecuted_blocks=1 00:28:22.737 00:28:22.737 ' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:28:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.737 --rc genhtml_branch_coverage=1 00:28:22.737 --rc genhtml_function_coverage=1 00:28:22.737 --rc genhtml_legend=1 00:28:22.737 --rc geninfo_all_blocks=1 00:28:22.737 --rc geninfo_unexecuted_blocks=1 00:28:22.737 00:28:22.737 ' 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:22.737 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:22.738 08:41:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80179 00:28:22.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80179 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # '[' -z 80179 ']' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@843 -- # local max_retries=100 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@847 -- # xtrace_disable 00:28:22.738 08:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.738 [2024-11-20 08:41:10.280089] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
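The `waitforlisten 80179` step above gates every later rpc.py call: the freshly spawned spdk_tgt is only usable once its JSON-RPC server accepts connections on /var/tmp/spdk.sock. A plausible sketch of that polling pattern, assuming rpc.py's `-s` socket option and the `rpc_get_methods` RPC (both real SPDK interfaces, but this loop is a simplified reconstruction, not the SPDK helper's source):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        # Poll the RPC socket until the target answers; give up if the process dies first.
        while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            sleep 0.1
        done
    }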
00:28:22.738 [2024-11-20 08:41:10.280448] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80179 ] 00:28:22.997 [2024-11-20 08:41:10.464638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.255 [2024-11-20 08:41:10.585074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@871 -- # return 0 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:24.192 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=basen1 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1373 
-- # local nb 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:28:24.452 { 00:28:24.452 "name": "basen1", 00:28:24.452 "aliases": [ 00:28:24.452 "8d670997-f040-4d87-b2d0-65aca735e41d" 00:28:24.452 ], 00:28:24.452 "product_name": "NVMe disk", 00:28:24.452 "block_size": 4096, 00:28:24.452 "num_blocks": 1310720, 00:28:24.452 "uuid": "8d670997-f040-4d87-b2d0-65aca735e41d", 00:28:24.452 "numa_id": -1, 00:28:24.452 "assigned_rate_limits": { 00:28:24.452 "rw_ios_per_sec": 0, 00:28:24.452 "rw_mbytes_per_sec": 0, 00:28:24.452 "r_mbytes_per_sec": 0, 00:28:24.452 "w_mbytes_per_sec": 0 00:28:24.452 }, 00:28:24.452 "claimed": true, 00:28:24.452 "claim_type": "read_many_write_one", 00:28:24.452 "zoned": false, 00:28:24.452 "supported_io_types": { 00:28:24.452 "read": true, 00:28:24.452 "write": true, 00:28:24.452 "unmap": true, 00:28:24.452 "flush": true, 00:28:24.452 "reset": true, 00:28:24.452 "nvme_admin": true, 00:28:24.452 "nvme_io": true, 00:28:24.452 "nvme_io_md": false, 00:28:24.452 "write_zeroes": true, 00:28:24.452 "zcopy": false, 00:28:24.452 "get_zone_info": false, 00:28:24.452 "zone_management": false, 00:28:24.452 "zone_append": false, 00:28:24.452 "compare": true, 00:28:24.452 "compare_and_write": false, 00:28:24.452 "abort": true, 00:28:24.452 "seek_hole": false, 00:28:24.452 "seek_data": false, 00:28:24.452 "copy": true, 00:28:24.452 "nvme_iov_md": false 00:28:24.452 }, 00:28:24.452 "driver_specific": { 00:28:24.452 "nvme": [ 00:28:24.452 { 00:28:24.452 "pci_address": "0000:00:11.0", 00:28:24.452 "trid": { 00:28:24.452 "trtype": "PCIe", 00:28:24.452 "traddr": "0000:00:11.0" 00:28:24.452 }, 00:28:24.452 "ctrlr_data": { 00:28:24.452 "cntlid": 0, 00:28:24.452 "vendor_id": "0x1b36", 00:28:24.452 "model_number": "QEMU NVMe Ctrl", 00:28:24.452 "serial_number": "12341", 00:28:24.452 "firmware_revision": "8.0.0", 00:28:24.452 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:24.452 "oacs": { 00:28:24.452 "security": 0, 00:28:24.452 "format": 1, 00:28:24.452 "firmware": 0, 00:28:24.452 "ns_manage": 1 00:28:24.452 }, 00:28:24.452 "multi_ctrlr": false, 00:28:24.452 "ana_reporting": false 00:28:24.452 }, 00:28:24.452 "vs": { 00:28:24.452 "nvme_version": "1.4" 00:28:24.452 }, 00:28:24.452 "ns_data": { 00:28:24.452 "id": 1, 00:28:24.452 "can_share": false 00:28:24.452 } 00:28:24.452 } 00:28:24.452 ], 00:28:24.452 "mp_policy": "active_passive" 00:28:24.452 } 00:28:24.452 } 00:28:24.452 ]' 00:28:24.452 08:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # nb=1310720 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=5120 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # echo 5120 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:24.711 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:24.970 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=257c5e22-6d3d-45b1-9abe-5c3bf9afd283 00:28:24.970 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:24.970 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 257c5e22-6d3d-45b1-9abe-5c3bf9afd283 00:28:24.970 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:25.229 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7f6ef4d4-fe1a-4704-8579-408f2ddee09f 00:28:25.229 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7f6ef4d4-fe1a-4704-8579-408f2ddee09f 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=89ccd34d-660e-40d7-a0c0-1469a80130d0 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 89ccd34d-660e-40d7-a0c0-1469a80130d0 ]] 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 89ccd34d-660e-40d7-a0c0-1469a80130d0 5120 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:25.487 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=89ccd34d-660e-40d7-a0c0-1469a80130d0 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 89ccd34d-660e-40d7-a0c0-1469a80130d0 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1370 -- # local bdev_name=89ccd34d-660e-40d7-a0c0-1469a80130d0 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1371 -- # local bdev_info 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1372 -- # local bs 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1373 -- # local nb 00:28:25.488 08:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89ccd34d-660e-40d7-a0c0-1469a80130d0 00:28:25.746 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # bdev_info='[ 00:28:25.746 { 00:28:25.746 "name": "89ccd34d-660e-40d7-a0c0-1469a80130d0", 00:28:25.746 "aliases": [ 00:28:25.746 "lvs/basen1p0" 00:28:25.746 ], 00:28:25.746 "product_name": "Logical Volume", 00:28:25.746 "block_size": 4096, 00:28:25.746 "num_blocks": 5242880, 00:28:25.746 "uuid": "89ccd34d-660e-40d7-a0c0-1469a80130d0", 00:28:25.746 "assigned_rate_limits": { 00:28:25.746 "rw_ios_per_sec": 0, 00:28:25.746 "rw_mbytes_per_sec": 0, 00:28:25.746 "r_mbytes_per_sec": 0, 00:28:25.746 "w_mbytes_per_sec": 0 00:28:25.746 }, 00:28:25.746 "claimed": false, 00:28:25.746 "zoned": false, 00:28:25.746 "supported_io_types": { 00:28:25.746 "read": true, 00:28:25.746 "write": true, 00:28:25.746 "unmap": true, 00:28:25.746 "flush": false, 00:28:25.746 "reset": true, 00:28:25.746 "nvme_admin": false, 00:28:25.746 "nvme_io": false, 00:28:25.746 "nvme_io_md": false, 00:28:25.747 "write_zeroes": 
true, 00:28:25.747 "zcopy": false, 00:28:25.747 "get_zone_info": false, 00:28:25.747 "zone_management": false, 00:28:25.747 "zone_append": false, 00:28:25.747 "compare": false, 00:28:25.747 "compare_and_write": false, 00:28:25.747 "abort": false, 00:28:25.747 "seek_hole": true, 00:28:25.747 "seek_data": true, 00:28:25.747 "copy": false, 00:28:25.747 "nvme_iov_md": false 00:28:25.747 }, 00:28:25.747 "driver_specific": { 00:28:25.747 "lvol": { 00:28:25.747 "lvol_store_uuid": "7f6ef4d4-fe1a-4704-8579-408f2ddee09f", 00:28:25.747 "base_bdev": "basen1", 00:28:25.747 "thin_provision": true, 00:28:25.747 "num_allocated_clusters": 0, 00:28:25.747 "snapshot": false, 00:28:25.747 "clone": false, 00:28:25.747 "esnap_clone": false 00:28:25.747 } 00:28:25.747 } 00:28:25.747 } 00:28:25.747 ]' 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # jq '.[] .block_size' 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # bs=4096 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # jq '.[] .num_blocks' 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # nb=5242880 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bdev_size=20480 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # echo 20480 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:25.747 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:26.005 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:26.005 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:26.005 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:26.264 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:26.264 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:26.264 08:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 89ccd34d-660e-40d7-a0c0-1469a80130d0 -c cachen1p0 --l2p_dram_limit 2 00:28:26.524 [2024-11-20 08:41:13.905769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.524 [2024-11-20 08:41:13.905858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:26.524 [2024-11-20 08:41:13.905882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:26.524 [2024-11-20 08:41:13.905894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.524 [2024-11-20 08:41:13.905982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.524 [2024-11-20 08:41:13.906014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:26.524 [2024-11-20 08:41:13.906032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:28:26.524 [2024-11-20 08:41:13.906047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.524 [2024-11-20 08:41:13.906089] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:26.524 [2024-11-20 
08:41:13.907302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:26.524 [2024-11-20 08:41:13.907343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.524 [2024-11-20 08:41:13.907355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:26.524 [2024-11-20 08:41:13.907371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.261 ms 00:28:26.525 [2024-11-20 08:41:13.907382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.907477] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 18f613ac-8844-4956-8da6-509abc960fbc 00:28:26.525 [2024-11-20 08:41:13.909847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.909888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:26.525 [2024-11-20 08:41:13.909901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:26.525 [2024-11-20 08:41:13.909916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.923171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.923231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:26.525 [2024-11-20 08:41:13.923250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.210 ms 00:28:26.525 [2024-11-20 08:41:13.923265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.923318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.923335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:26.525 [2024-11-20 08:41:13.923347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:26.525 [2024-11-20 08:41:13.923365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.923457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.923474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:26.525 [2024-11-20 08:41:13.923484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:26.525 [2024-11-20 08:41:13.923505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.923534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:26.525 [2024-11-20 08:41:13.930159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.930195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:26.525 [2024-11-20 08:41:13.930213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.642 ms 00:28:26.525 [2024-11-20 08:41:13.930224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.930260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.930272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:26.525 [2024-11-20 08:41:13.930287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:26.525 [2024-11-20 08:41:13.930298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.930338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:26.525 [2024-11-20 08:41:13.930477] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:26.525 [2024-11-20 08:41:13.930501] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:26.525 [2024-11-20 08:41:13.930516] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:26.525 [2024-11-20 08:41:13.930533] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:26.525 [2024-11-20 08:41:13.930546] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:26.525 [2024-11-20 08:41:13.930562] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:26.525 [2024-11-20 08:41:13.930573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:26.525 [2024-11-20 08:41:13.930589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:26.525 [2024-11-20 08:41:13.930600] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:26.525 [2024-11-20 08:41:13.930614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.930625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:26.525 [2024-11-20 08:41:13.930639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:28:26.525 [2024-11-20 08:41:13.930650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.930728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.525 [2024-11-20 08:41:13.930739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:26.525 [2024-11-20 08:41:13.930755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:26.525 [2024-11-20 08:41:13.930778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.525 [2024-11-20 08:41:13.930883] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:26.525 [2024-11-20 08:41:13.930902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:26.525 [2024-11-20 08:41:13.930918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:26.525 [2024-11-20 08:41:13.930929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.930943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:26.525 [2024-11-20 08:41:13.930952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.930965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:26.525 [2024-11-20 08:41:13.930975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:26.525 [2024-11-20 08:41:13.930999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:26.525 [2024-11-20 08:41:13.931010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.931023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:26.525 [2024-11-20 08:41:13.931033] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:26.525 [2024-11-20 08:41:13.931046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.931056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:26.525 [2024-11-20 08:41:13.931069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:26.525 [2024-11-20 08:41:13.931079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.931094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:26.525 [2024-11-20 08:41:13.931103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:26.525 [2024-11-20 08:41:13.931117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.525 [2024-11-20 08:41:13.931126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:26.525 [2024-11-20 08:41:13.931139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:26.525 [2024-11-20 08:41:13.931148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:26.525 [2024-11-20 08:41:13.931161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:26.525 [2024-11-20 08:41:13.931171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:26.525 [2024-11-20 08:41:13.931184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:26.525 [2024-11-20 08:41:13.931193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:26.525 [2024-11-20 08:41:13.931206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:26.526 [2024-11-20 08:41:13.931215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:26.526 [2024-11-20 08:41:13.931227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:26.526 [2024-11-20 08:41:13.931238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:26.526 [2024-11-20 08:41:13.931250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:26.526 [2024-11-20 08:41:13.931259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:26.526 [2024-11-20 08:41:13.931274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:26.526 [2024-11-20 08:41:13.931283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:26.526 [2024-11-20 08:41:13.931304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:26.526 [2024-11-20 08:41:13.931315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:26.526 [2024-11-20 08:41:13.931337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:26.526 [2024-11-20 08:41:13.931370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:26.526 [2024-11-20 08:41:13.931382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931391] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:26.526 [2024-11-20 08:41:13.931404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:26.526 [2024-11-20 08:41:13.931415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:26.526 [2024-11-20 08:41:13.931431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:26.526 [2024-11-20 08:41:13.931449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:26.526 [2024-11-20 08:41:13.931465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:26.526 [2024-11-20 08:41:13.931475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:26.526 [2024-11-20 08:41:13.931488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:26.526 [2024-11-20 08:41:13.931497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:26.526 [2024-11-20 08:41:13.931510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:26.526 [2024-11-20 08:41:13.931525] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:26.526 [2024-11-20 08:41:13.931541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:26.526 [2024-11-20 08:41:13.931571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:26.526 [2024-11-20 08:41:13.931607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:26.526 [2024-11-20 08:41:13.931621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:26.526 [2024-11-20 08:41:13.931631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:26.526 [2024-11-20 08:41:13.931644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:26.526 [2024-11-20 08:41:13.931733] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:26.526 [2024-11-20 08:41:13.931751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:26.526 [2024-11-20 08:41:13.931777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:26.526 [2024-11-20 08:41:13.931788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:26.526 [2024-11-20 08:41:13.931801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:26.526 [2024-11-20 08:41:13.931812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.526 [2024-11-20 08:41:13.931826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:26.526 [2024-11-20 08:41:13.931836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:28:26.526 [2024-11-20 08:41:13.931851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.526 [2024-11-20 08:41:13.931899] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
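Note: the superblock region table above encodes offsets and sizes as hex counts of 4 KiB FTL blocks. A minimal gawk sketch (assuming a hypothetical build.log capture of this console output) converts them back to the MiB figures shown in the layout dump:

    grep -o 'Region type:0x[0-9a-f]* ver:[0-9]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' build.log |
      gawk '{ offs = strtonum(substr($4, 10)); sz = strtonum(substr($5, 8))
              printf "%-16s offs %9.2f MiB  size %9.2f MiB\n", $2, offs * 4096 / 1048576, sz * 4096 / 1048576 }'
    # e.g. type:0x2 -> offs 0.12 MiB, size 14.50 MiB (0xe80 = 3712 blocks x 4 KiB),
    # matching the "Region l2p" entry in the NV cache layout dump above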
00:28:26.526 [2024-11-20 08:41:13.931924] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:30.717 [2024-11-20 08:41:17.610356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.610457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:30.717 [2024-11-20 08:41:17.610478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3684.424 ms 00:28:30.717 [2024-11-20 08:41:17.610493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.659039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.659117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:30.717 [2024-11-20 08:41:17.659135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.183 ms 00:28:30.717 [2024-11-20 08:41:17.659150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.659307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.659326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:30.717 [2024-11-20 08:41:17.659338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:30.717 [2024-11-20 08:41:17.659357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.714050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.714130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:30.717 [2024-11-20 08:41:17.714149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.699 ms 00:28:30.717 [2024-11-20 08:41:17.714164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.714238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.714260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:30.717 [2024-11-20 08:41:17.714272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:30.717 [2024-11-20 08:41:17.714287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.715118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.715146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:30.717 [2024-11-20 08:41:17.715158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.739 ms 00:28:30.717 [2024-11-20 08:41:17.715173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.715240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.715255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:30.717 [2024-11-20 08:41:17.715270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:30.717 [2024-11-20 08:41:17.715288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.740297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.740371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:30.717 [2024-11-20 08:41:17.740390] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.023 ms 00:28:30.717 [2024-11-20 08:41:17.740405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.755627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:30.717 [2024-11-20 08:41:17.757304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.757332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:30.717 [2024-11-20 08:41:17.757352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.762 ms 00:28:30.717 [2024-11-20 08:41:17.757364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.805288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.805353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:30.717 [2024-11-20 08:41:17.805376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.934 ms 00:28:30.717 [2024-11-20 08:41:17.805388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.805509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.805529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:30.717 [2024-11-20 08:41:17.805548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:28:30.717 [2024-11-20 08:41:17.805560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.843224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.843281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:30.717 [2024-11-20 08:41:17.843303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.657 ms 00:28:30.717 [2024-11-20 08:41:17.843314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.881239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.881328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:30.717 [2024-11-20 08:41:17.881351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.923 ms 00:28:30.717 [2024-11-20 08:41:17.881363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.717 [2024-11-20 08:41:17.882151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.717 [2024-11-20 08:41:17.882179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:30.717 [2024-11-20 08:41:17.882196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.730 ms 00:28:30.717 [2024-11-20 08:41:17.882208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:17.992845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.718 [2024-11-20 08:41:17.992918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:30.718 [2024-11-20 08:41:17.992950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 110.694 ms 00:28:30.718 [2024-11-20 08:41:17.992963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.033254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:30.718 [2024-11-20 08:41:18.033338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:30.718 [2024-11-20 08:41:18.033377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.199 ms 00:28:30.718 [2024-11-20 08:41:18.033389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.074743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.718 [2024-11-20 08:41:18.074811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:30.718 [2024-11-20 08:41:18.074834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.323 ms 00:28:30.718 [2024-11-20 08:41:18.074845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.115926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.718 [2024-11-20 08:41:18.116008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:30.718 [2024-11-20 08:41:18.116032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.074 ms 00:28:30.718 [2024-11-20 08:41:18.116044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.116147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.718 [2024-11-20 08:41:18.116162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:30.718 [2024-11-20 08:41:18.116182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:30.718 [2024-11-20 08:41:18.116194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.116357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.718 [2024-11-20 08:41:18.116371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:30.718 [2024-11-20 08:41:18.116391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:28:30.718 [2024-11-20 08:41:18.116402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.718 [2024-11-20 08:41:18.117854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4218.394 ms, result 0 00:28:30.718 { 00:28:30.718 "name": "ftl", 00:28:30.718 "uuid": "18f613ac-8844-4956-8da6-509abc960fbc" 00:28:30.718 } 00:28:30.718 08:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:30.977 [2024-11-20 08:41:18.328370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.977 08:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:31.236 08:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:31.236 [2024-11-20 08:41:18.752429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:31.236 08:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:31.495 [2024-11-20 08:41:18.971213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:31.495 08:41:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:32.067 Fill FTL, iteration 1 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80301 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80301 /var/tmp/spdk.tgt.sock 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # '[' -z 80301 ']' 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@843 -- # local max_retries=100 00:28:32.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@847 -- # xtrace_disable 00:28:32.067 08:41:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.067 [2024-11-20 08:41:19.449798] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
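For reference, the fill geometry configured above works out to bs=1048576 (1 MiB) x count=1024 = 1073741824 bytes, i.e. the 1 GiB size, written at qd=2 and repeated iterations=2 times. Consolidated from the traces that follow, one fill pass is a single spdk_dd invocation against the TCP-attached ftln1 bdev:

    # Fill FTL, iteration 1 (iteration 2 repeats this with --seek=1024):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0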
00:28:32.067 [2024-11-20 08:41:19.449926] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80301 ] 00:28:32.327 [2024-11-20 08:41:19.632262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.327 [2024-11-20 08:41:19.748138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.263 08:41:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:28:33.263 08:41:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@871 -- # return 0 00:28:33.263 08:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:33.522 ftln1 00:28:33.522 08:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:33.522 08:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80301 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' -z 80301 ']' 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@961 -- # kill -0 80301 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # uname 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80301 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:28:33.781 killing process with pid 80301 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80301' 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # kill 80301 00:28:33.781 08:41:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@981 -- # wait 80301 00:28:36.319 08:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:36.319 08:41:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:36.319 [2024-11-20 08:41:23.652780] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:28:36.319 [2024-11-20 08:41:23.652905] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80365 ] 00:28:36.319 [2024-11-20 08:41:23.830803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.578 [2024-11-20 08:41:23.945124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.958  [2024-11-20T08:41:26.457Z] Copying: 241/1024 [MB] (241 MBps) [2024-11-20T08:41:27.416Z] Copying: 486/1024 [MB] (245 MBps) [2024-11-20T08:41:28.842Z] Copying: 729/1024 [MB] (243 MBps) [2024-11-20T08:41:28.842Z] Copying: 973/1024 [MB] (244 MBps) [2024-11-20T08:41:29.780Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:28:42.219 00:28:42.219 Calculate MD5 checksum, iteration 1 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:42.219 08:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:42.478 [2024-11-20 08:41:29.803395] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
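Each fill pass is followed by a readback-and-hash pass through the same initiator; per the traces, the iteration-1 check is effectively:

    # dd the first GiB back out of ftln1 and hash it; the digest is what
    # upgrade_shutdown.sh stores in sums[i] (92ff8a19... below for iteration 1)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '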
00:28:42.478 [2024-11-20 08:41:29.804083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80431 ] 00:28:42.478 [2024-11-20 08:41:29.985206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.738 [2024-11-20 08:41:30.107834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.114  [2024-11-20T08:41:32.612Z] Copying: 616/1024 [MB] (616 MBps) [2024-11-20T08:41:33.547Z] Copying: 1024/1024 [MB] (average 585 MBps) 00:28:45.986 00:28:45.986 08:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:45.986 08:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:47.893 Fill FTL, iteration 2 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=92ff8a19bf8c681adbedda5859908408 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.893 08:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.894 [2024-11-20 08:41:35.268857] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:28:47.894 [2024-11-20 08:41:35.268974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80488 ] 00:28:47.894 [2024-11-20 08:41:35.435754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.152 [2024-11-20 08:41:35.557321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.530  [2024-11-20T08:41:38.027Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-20T08:41:39.407Z] Copying: 474/1024 [MB] (234 MBps) [2024-11-20T08:41:40.349Z] Copying: 706/1024 [MB] (232 MBps) [2024-11-20T08:41:40.608Z] Copying: 942/1024 [MB] (236 MBps) [2024-11-20T08:41:41.548Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:28:53.987 00:28:53.987 Calculate MD5 checksum, iteration 2 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:53.987 08:41:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:54.247 [2024-11-20 08:41:41.580304] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:28:54.247 [2024-11-20 08:41:41.580630] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80556 ] 00:28:54.247 [2024-11-20 08:41:41.762949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.506 [2024-11-20 08:41:41.876345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.446  [2024-11-20T08:41:44.266Z] Copying: 704/1024 [MB] (704 MBps) [2024-11-20T08:41:45.644Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:28:58.083 00:28:58.083 08:41:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:58.083 08:41:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=12c0d0e1e7a0fc9b7bfb962bc5522438 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:59.990 [2024-11-20 08:41:47.406278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.990 [2024-11-20 08:41:47.406339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:59.990 [2024-11-20 08:41:47.406357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:59.990 [2024-11-20 08:41:47.406368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.990 [2024-11-20 08:41:47.406396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.990 [2024-11-20 08:41:47.406408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:59.990 [2024-11-20 08:41:47.406429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:59.990 [2024-11-20 08:41:47.406443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.990 [2024-11-20 08:41:47.406465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.990 [2024-11-20 08:41:47.406476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:59.990 [2024-11-20 08:41:47.406487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.990 [2024-11-20 08:41:47.406497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.990 [2024-11-20 08:41:47.406559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.274 ms, result 0 00:28:59.990 true 00:28:59.990 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.248 { 00:29:00.248 "name": "ftl", 00:29:00.248 "properties": [ 00:29:00.248 { 00:29:00.248 "name": "superblock_version", 00:29:00.248 "value": 5, 00:29:00.248 "read-only": true 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "name": "base_device", 00:29:00.248 "bands": [ 00:29:00.248 { 00:29:00.248 "id": 0, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 
00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 1, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 2, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 3, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 4, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 5, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 6, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 7, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.248 }, 00:29:00.248 { 00:29:00.248 "id": 8, 00:29:00.248 "state": "FREE", 00:29:00.248 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 9, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 10, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 11, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 12, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 13, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 14, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 15, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 16, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 17, 00:29:00.249 "state": "FREE", 00:29:00.249 "validity": 0.0 00:29:00.249 } 00:29:00.249 ], 00:29:00.249 "read-only": true 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "name": "cache_device", 00:29:00.249 "type": "bdev", 00:29:00.249 "chunks": [ 00:29:00.249 { 00:29:00.249 "id": 0, 00:29:00.249 "state": "INACTIVE", 00:29:00.249 "utilization": 0.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 1, 00:29:00.249 "state": "CLOSED", 00:29:00.249 "utilization": 1.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 2, 00:29:00.249 "state": "CLOSED", 00:29:00.249 "utilization": 1.0 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 3, 00:29:00.249 "state": "OPEN", 00:29:00.249 "utilization": 0.001953125 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "id": 4, 00:29:00.249 "state": "OPEN", 00:29:00.249 "utilization": 0.0 00:29:00.249 } 00:29:00.249 ], 00:29:00.249 "read-only": true 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "name": "verbose_mode", 00:29:00.249 "value": true, 00:29:00.249 "unit": "", 00:29:00.249 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:00.249 }, 00:29:00.249 { 00:29:00.249 "name": "prep_upgrade_on_shutdown", 00:29:00.249 "value": false, 00:29:00.249 "unit": "", 00:29:00.249 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:00.249 } 00:29:00.249 ] 00:29:00.249 } 00:29:00.249 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:00.508 [2024-11-20 08:41:47.826291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:00.508 [2024-11-20 08:41:47.826346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.508 [2024-11-20 08:41:47.826361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:00.508 [2024-11-20 08:41:47.826371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.508 [2024-11-20 08:41:47.826396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.508 [2024-11-20 08:41:47.826408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.508 [2024-11-20 08:41:47.826420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.508 [2024-11-20 08:41:47.826429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.508 [2024-11-20 08:41:47.826449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.508 [2024-11-20 08:41:47.826460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.508 [2024-11-20 08:41:47.826470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.508 [2024-11-20 08:41:47.826479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.508 [2024-11-20 08:41:47.826538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.236 ms, result 0 00:29:00.508 true 00:29:00.508 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:00.508 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.508 08:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:00.766 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:00.766 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:00.766 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:00.766 [2024-11-20 08:41:48.270246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.766 [2024-11-20 08:41:48.270301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.766 [2024-11-20 08:41:48.270317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:00.766 [2024-11-20 08:41:48.270328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.766 [2024-11-20 08:41:48.270354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.766 [2024-11-20 08:41:48.270365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.766 [2024-11-20 08:41:48.270374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:00.766 [2024-11-20 08:41:48.270384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.766 [2024-11-20 08:41:48.270404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.766 [2024-11-20 08:41:48.270415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.766 [2024-11-20 08:41:48.270425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.766 [2024-11-20 08:41:48.270434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:00.766 [2024-11-20 08:41:48.270493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:29:00.766 true 00:29:00.766 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:01.025 { 00:29:01.025 "name": "ftl", 00:29:01.025 "properties": [ 00:29:01.025 { 00:29:01.025 "name": "superblock_version", 00:29:01.025 "value": 5, 00:29:01.025 "read-only": true 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "name": "base_device", 00:29:01.025 "bands": [ 00:29:01.025 { 00:29:01.025 "id": 0, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 1, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 2, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 3, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 4, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 5, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 6, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 7, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 8, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 9, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 10, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 11, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.025 "id": 12, 00:29:01.025 "state": "FREE", 00:29:01.025 "validity": 0.0 00:29:01.025 }, 00:29:01.025 { 00:29:01.026 "id": 13, 00:29:01.026 "state": "FREE", 00:29:01.026 "validity": 0.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 14, 00:29:01.026 "state": "FREE", 00:29:01.026 "validity": 0.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 15, 00:29:01.026 "state": "FREE", 00:29:01.026 "validity": 0.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 16, 00:29:01.026 "state": "FREE", 00:29:01.026 "validity": 0.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 17, 00:29:01.026 "state": "FREE", 00:29:01.026 "validity": 0.0 00:29:01.026 } 00:29:01.026 ], 00:29:01.026 "read-only": true 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "name": "cache_device", 00:29:01.026 "type": "bdev", 00:29:01.026 "chunks": [ 00:29:01.026 { 00:29:01.026 "id": 0, 00:29:01.026 "state": "INACTIVE", 00:29:01.026 "utilization": 0.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 1, 00:29:01.026 "state": "CLOSED", 00:29:01.026 "utilization": 1.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 2, 00:29:01.026 "state": "CLOSED", 00:29:01.026 "utilization": 1.0 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 3, 00:29:01.026 "state": "OPEN", 00:29:01.026 "utilization": 0.001953125 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "id": 4, 00:29:01.026 "state": "OPEN", 00:29:01.026 "utilization": 0.0 00:29:01.026 } 00:29:01.026 ], 00:29:01.026 "read-only": true 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "name": "verbose_mode", 
00:29:01.026 "value": true, 00:29:01.026 "unit": "", 00:29:01.026 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:01.026 }, 00:29:01.026 { 00:29:01.026 "name": "prep_upgrade_on_shutdown", 00:29:01.026 "value": true, 00:29:01.026 "unit": "", 00:29:01.026 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:01.026 } 00:29:01.026 ] 00:29:01.026 } 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80179 ]] 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80179 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' -z 80179 ']' 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@961 -- # kill -0 80179 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # uname 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80179 00:29:01.026 killing process with pid 80179 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80179' 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # kill 80179 00:29:01.026 08:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@981 -- # wait 80179 00:29:02.401 [2024-11-20 08:41:49.673599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:02.401 [2024-11-20 08:41:49.693466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.401 [2024-11-20 08:41:49.693519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:02.401 [2024-11-20 08:41:49.693535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.401 [2024-11-20 08:41:49.693545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.401 [2024-11-20 08:41:49.693568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:02.401 [2024-11-20 08:41:49.697928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.401 [2024-11-20 08:41:49.697956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:02.401 [2024-11-20 08:41:49.697969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.350 ms 00:29:02.401 [2024-11-20 08:41:49.697979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.884480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.884551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:10.528 [2024-11-20 08:41:56.884568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7198.130 ms 00:29:10.528 [2024-11-20 08:41:56.884585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.885757] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.885779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:10.528 [2024-11-20 08:41:56.885792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:29:10.528 [2024-11-20 08:41:56.885802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.886745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.886771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:10.528 [2024-11-20 08:41:56.886784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:29:10.528 [2024-11-20 08:41:56.886794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.901718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.901761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:10.528 [2024-11-20 08:41:56.901774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.897 ms 00:29:10.528 [2024-11-20 08:41:56.901784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.910874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.910916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:10.528 [2024-11-20 08:41:56.910931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.068 ms 00:29:10.528 [2024-11-20 08:41:56.910942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.911051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.911065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:10.528 [2024-11-20 08:41:56.911085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:10.528 [2024-11-20 08:41:56.911095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.925702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.925742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:10.528 [2024-11-20 08:41:56.925755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.613 ms 00:29:10.528 [2024-11-20 08:41:56.925765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.940257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.940294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:10.528 [2024-11-20 08:41:56.940307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.480 ms 00:29:10.528 [2024-11-20 08:41:56.940317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.954591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.954629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:10.528 [2024-11-20 08:41:56.954641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.263 ms 00:29:10.528 [2024-11-20 08:41:56.954650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.968967] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.528 [2024-11-20 08:41:56.969009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:10.528 [2024-11-20 08:41:56.969022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.266 ms 00:29:10.528 [2024-11-20 08:41:56.969031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.528 [2024-11-20 08:41:56.969064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:10.528 [2024-11-20 08:41:56.969080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:10.529 [2024-11-20 08:41:56.969093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:10.529 [2024-11-20 08:41:56.969117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:10.529 [2024-11-20 08:41:56.969129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:10.529 [2024-11-20 08:41:56.969288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:10.529 [2024-11-20 08:41:56.969298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 18f613ac-8844-4956-8da6-509abc960fbc 00:29:10.529 [2024-11-20 08:41:56.969309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:10.529 [2024-11-20 08:41:56.969319] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:10.529 [2024-11-20 08:41:56.969329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:10.529 [2024-11-20 08:41:56.969339] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:10.529 [2024-11-20 08:41:56.969349] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:10.529 [2024-11-20 08:41:56.969364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:10.529 [2024-11-20 08:41:56.969374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:10.529 [2024-11-20 08:41:56.969383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:10.529 [2024-11-20 08:41:56.969393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:10.529 [2024-11-20 08:41:56.969407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.529 [2024-11-20 08:41:56.969421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:10.529 [2024-11-20 08:41:56.969431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:29:10.529 [2024-11-20 08:41:56.969441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:56.989296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.529 [2024-11-20 08:41:56.989353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:10.529 [2024-11-20 08:41:56.989370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.854 ms 00:29:10.529 [2024-11-20 08:41:56.989390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:56.989901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.529 [2024-11-20 08:41:56.989912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:10.529 [2024-11-20 08:41:56.989924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:29:10.529 [2024-11-20 08:41:56.989933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.054528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.054590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:10.529 [2024-11-20 08:41:57.054611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.054621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.054677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.054687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:10.529 [2024-11-20 08:41:57.054698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.054708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.054820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.054834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:10.529 [2024-11-20 08:41:57.054844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.054854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.054878] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.054889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:10.529 [2024-11-20 08:41:57.054899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.054909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.177690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.177748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:10.529 [2024-11-20 08:41:57.177764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.177781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:10.529 [2024-11-20 08:41:57.277300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:10.529 [2024-11-20 08:41:57.277438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:10.529 [2024-11-20 08:41:57.277521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:10.529 [2024-11-20 08:41:57.277664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:10.529 [2024-11-20 08:41:57.277740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.277798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:10.529 [2024-11-20 08:41:57.277819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 
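The WAF figure in the statistics dump above is simply total device writes divided by user writes; a quick sanity check with the numbers from this run (a sketch using plain bc, not part of the test itself):

    # WAF = total writes / user writes, per the ftl_dev_dump_stats output above.
    echo "scale=4; 786752 / 524288" | bc   # prints 1.5006, matching the dump

The bands dump uses the same written-over-capacity ratio: 2048 / 261120 is approximately 0.0078431, which matches the validity value the bdev_ftl_get_properties JSON reports for the partially written band further down.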
[2024-11-20 08:41:57.277876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:10.529 [2024-11-20 08:41:57.277889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:10.529 [2024-11-20 08:41:57.277900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:10.529 [2024-11-20 08:41:57.277909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.529 [2024-11-20 08:41:57.278040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7596.849 ms, result 0 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80753 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80753 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # '[' -z 80753 ']' 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@843 -- # local max_retries=100 00:29:13.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@847 -- # xtrace_disable 00:29:13.816 08:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.816 [2024-11-20 08:42:00.759297] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
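waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A minimal sketch of that polling pattern, assuming rpc.py and the socket path shown in the trace (an illustration, not the autotest helper itself):

    # Poll the RPC socket until the target responds or retries run out.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done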
00:29:13.816 [2024-11-20 08:42:00.759530] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80753 ] 00:29:13.816 [2024-11-20 08:42:00.943852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.816 [2024-11-20 08:42:01.061560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.421 [2024-11-20 08:42:01.972802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:14.421 [2024-11-20 08:42:01.972871] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:14.681 [2024-11-20 08:42:02.119749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.119808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:14.681 [2024-11-20 08:42:02.119824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.681 [2024-11-20 08:42:02.119835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.119885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.119898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:14.681 [2024-11-20 08:42:02.119909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:14.681 [2024-11-20 08:42:02.119918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.119947] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:14.681 [2024-11-20 08:42:02.121036] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:14.681 [2024-11-20 08:42:02.121063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.121074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:14.681 [2024-11-20 08:42:02.121085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:29:14.681 [2024-11-20 08:42:02.121094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.122535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:14.681 [2024-11-20 08:42:02.142587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.142630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:14.681 [2024-11-20 08:42:02.142651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.084 ms 00:29:14.681 [2024-11-20 08:42:02.142662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.142724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.142737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:14.681 [2024-11-20 08:42:02.142748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:14.681 [2024-11-20 08:42:02.142758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.149561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 
08:42:02.149597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:14.681 [2024-11-20 08:42:02.149610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.733 ms 00:29:14.681 [2024-11-20 08:42:02.149620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.149682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.149696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:14.681 [2024-11-20 08:42:02.149707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:14.681 [2024-11-20 08:42:02.149717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.149762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.149774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:14.681 [2024-11-20 08:42:02.149789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:14.681 [2024-11-20 08:42:02.149799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.681 [2024-11-20 08:42:02.149825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:14.681 [2024-11-20 08:42:02.154631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.681 [2024-11-20 08:42:02.154664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:14.681 [2024-11-20 08:42:02.154675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.819 ms 00:29:14.681 [2024-11-20 08:42:02.154689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.682 [2024-11-20 08:42:02.154718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.682 [2024-11-20 08:42:02.154729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:14.682 [2024-11-20 08:42:02.154739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:14.682 [2024-11-20 08:42:02.154749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.682 [2024-11-20 08:42:02.154806] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:14.682 [2024-11-20 08:42:02.154829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:14.682 [2024-11-20 08:42:02.154866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:14.682 [2024-11-20 08:42:02.154884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:14.682 [2024-11-20 08:42:02.154981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:14.682 [2024-11-20 08:42:02.155006] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:14.682 [2024-11-20 08:42:02.155019] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:14.682 [2024-11-20 08:42:02.155032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155043] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155057] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:14.682 [2024-11-20 08:42:02.155067] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:14.682 [2024-11-20 08:42:02.155077] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:14.682 [2024-11-20 08:42:02.155087] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:14.682 [2024-11-20 08:42:02.155098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.682 [2024-11-20 08:42:02.155108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:14.682 [2024-11-20 08:42:02.155118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:29:14.682 [2024-11-20 08:42:02.155128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.682 [2024-11-20 08:42:02.155201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.682 [2024-11-20 08:42:02.155213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:14.682 [2024-11-20 08:42:02.155222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:14.682 [2024-11-20 08:42:02.155236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.682 [2024-11-20 08:42:02.155327] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:14.682 [2024-11-20 08:42:02.155339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:14.682 [2024-11-20 08:42:02.155350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:14.682 [2024-11-20 08:42:02.155380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:14.682 [2024-11-20 08:42:02.155399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:14.682 [2024-11-20 08:42:02.155408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:14.682 [2024-11-20 08:42:02.155417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:14.682 [2024-11-20 08:42:02.155439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:14.682 [2024-11-20 08:42:02.155448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:14.682 [2024-11-20 08:42:02.155467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:14.682 [2024-11-20 08:42:02.155476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:14.682 [2024-11-20 08:42:02.155495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:14.682 [2024-11-20 08:42:02.155503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155513] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:14.682 [2024-11-20 08:42:02.155522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:14.682 [2024-11-20 08:42:02.155549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:14.682 [2024-11-20 08:42:02.155588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:14.682 [2024-11-20 08:42:02.155615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:14.682 [2024-11-20 08:42:02.155643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:14.682 [2024-11-20 08:42:02.155670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:14.682 [2024-11-20 08:42:02.155698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:14.682 [2024-11-20 08:42:02.155725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:14.682 [2024-11-20 08:42:02.155736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:14.682 [2024-11-20 08:42:02.155754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:14.682 [2024-11-20 08:42:02.155764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.682 [2024-11-20 08:42:02.155787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:14.682 [2024-11-20 08:42:02.155797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:14.682 [2024-11-20 08:42:02.155806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:14.682 [2024-11-20 08:42:02.155815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:14.682 [2024-11-20 08:42:02.155824] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:14.682 [2024-11-20 08:42:02.155833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:14.682 [2024-11-20 08:42:02.155844] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:14.682 [2024-11-20 08:42:02.155856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:14.682 [2024-11-20 08:42:02.155877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:14.682 [2024-11-20 08:42:02.155908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:14.682 [2024-11-20 08:42:02.155918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:14.682 [2024-11-20 08:42:02.155928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:14.682 [2024-11-20 08:42:02.155938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.155999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.156010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:14.682 [2024-11-20 08:42:02.156021] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:14.682 [2024-11-20 08:42:02.156032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.156043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.682 [2024-11-20 08:42:02.156054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:14.683 [2024-11-20 08:42:02.156064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:14.683 [2024-11-20 08:42:02.156078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:14.683 [2024-11-20 08:42:02.156089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.683 [2024-11-20 08:42:02.156099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:14.683 [2024-11-20 08:42:02.156109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:29:14.683 [2024-11-20 08:42:02.156118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.683 [2024-11-20 08:42:02.156164] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:14.683 [2024-11-20 08:42:02.156177] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:18.874 [2024-11-20 08:42:05.920507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:05.920572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:18.875 [2024-11-20 08:42:05.920590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3770.453 ms 00:29:18.875 [2024-11-20 08:42:05.920617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:05.958543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:05.958599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:18.875 [2024-11-20 08:42:05.958614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.557 ms 00:29:18.875 [2024-11-20 08:42:05.958625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:05.958733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:05.958751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:18.875 [2024-11-20 08:42:05.958763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:18.875 [2024-11-20 08:42:05.958772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.005709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.005765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:18.875 [2024-11-20 08:42:06.005779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.948 ms 00:29:18.875 [2024-11-20 08:42:06.005793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.005872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.005884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:18.875 [2024-11-20 08:42:06.005895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:18.875 [2024-11-20 08:42:06.005905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.006414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.006439] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:18.875 [2024-11-20 08:42:06.006450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:29:18.875 [2024-11-20 08:42:06.006460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.006514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.006525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:18.875 [2024-11-20 08:42:06.006536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:18.875 [2024-11-20 08:42:06.006546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.027207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.027255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:18.875 [2024-11-20 08:42:06.027269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.672 ms 00:29:18.875 [2024-11-20 08:42:06.027279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.045961] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:18.875 [2024-11-20 08:42:06.046008] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:18.875 [2024-11-20 08:42:06.046023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.046034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:18.875 [2024-11-20 08:42:06.046054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.634 ms 00:29:18.875 [2024-11-20 08:42:06.046064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.065581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.065634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:18.875 [2024-11-20 08:42:06.065664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.503 ms 00:29:18.875 [2024-11-20 08:42:06.065675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.083453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.083493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:18.875 [2024-11-20 08:42:06.083505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.759 ms 00:29:18.875 [2024-11-20 08:42:06.083515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.101644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.101680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:18.875 [2024-11-20 08:42:06.101693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.101 ms 00:29:18.875 [2024-11-20 08:42:06.101703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.102484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.102522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:18.875 [2024-11-20 
08:42:06.102535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:29:18.875 [2024-11-20 08:42:06.102545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.199415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.199483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:18.875 [2024-11-20 08:42:06.199499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.002 ms 00:29:18.875 [2024-11-20 08:42:06.199511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.210496] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:18.875 [2024-11-20 08:42:06.211405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.211435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:18.875 [2024-11-20 08:42:06.211449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.843 ms 00:29:18.875 [2024-11-20 08:42:06.211460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.211572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.211588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:18.875 [2024-11-20 08:42:06.211599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:18.875 [2024-11-20 08:42:06.211610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.211675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.211688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:18.875 [2024-11-20 08:42:06.211699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:18.875 [2024-11-20 08:42:06.211708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.211731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.211742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:18.875 [2024-11-20 08:42:06.211752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:18.875 [2024-11-20 08:42:06.211766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.211800] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:18.875 [2024-11-20 08:42:06.211813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.211823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:18.875 [2024-11-20 08:42:06.211833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:18.875 [2024-11-20 08:42:06.211843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.248766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.248817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:18.875 [2024-11-20 08:42:06.248831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.953 ms 00:29:18.875 [2024-11-20 08:42:06.248842] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.248921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.875 [2024-11-20 08:42:06.248934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:18.875 [2024-11-20 08:42:06.248945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:18.875 [2024-11-20 08:42:06.248955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.875 [2024-11-20 08:42:06.250146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4136.591 ms, result 0 00:29:18.875 [2024-11-20 08:42:06.265125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.875 [2024-11-20 08:42:06.281118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:18.875 [2024-11-20 08:42:06.290138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@871 -- # return 0 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:19.444 [2024-11-20 08:42:06.925404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.444 [2024-11-20 08:42:06.925457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:19.444 [2024-11-20 08:42:06.925472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:19.444 [2024-11-20 08:42:06.925486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.444 [2024-11-20 08:42:06.925513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.444 [2024-11-20 08:42:06.925524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:19.444 [2024-11-20 08:42:06.925535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.444 [2024-11-20 08:42:06.925545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.444 [2024-11-20 08:42:06.925565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.444 [2024-11-20 08:42:06.925576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:19.444 [2024-11-20 08:42:06.925586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.444 [2024-11-20 08:42:06.925596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.444 [2024-11-20 08:42:06.925655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:29:19.444 true 00:29:19.444 08:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.702 { 00:29:19.702 "name": "ftl", 00:29:19.702 "properties": [ 00:29:19.702 { 00:29:19.702 "name": "superblock_version", 00:29:19.702 "value": 5, 00:29:19.702 "read-only": true 00:29:19.702 }, 
00:29:19.702 { 00:29:19.702 "name": "base_device", 00:29:19.702 "bands": [ 00:29:19.703 { 00:29:19.703 "id": 0, 00:29:19.703 "state": "CLOSED", 00:29:19.703 "validity": 1.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 1, 00:29:19.703 "state": "CLOSED", 00:29:19.703 "validity": 1.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 2, 00:29:19.703 "state": "CLOSED", 00:29:19.703 "validity": 0.007843137254901933 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 3, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 4, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 5, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 6, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 7, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 8, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 9, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 10, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 11, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 12, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 13, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 14, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 15, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 16, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 17, 00:29:19.703 "state": "FREE", 00:29:19.703 "validity": 0.0 00:29:19.703 } 00:29:19.703 ], 00:29:19.703 "read-only": true 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "name": "cache_device", 00:29:19.703 "type": "bdev", 00:29:19.703 "chunks": [ 00:29:19.703 { 00:29:19.703 "id": 0, 00:29:19.703 "state": "INACTIVE", 00:29:19.703 "utilization": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 1, 00:29:19.703 "state": "OPEN", 00:29:19.703 "utilization": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 2, 00:29:19.703 "state": "OPEN", 00:29:19.703 "utilization": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 3, 00:29:19.703 "state": "FREE", 00:29:19.703 "utilization": 0.0 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "id": 4, 00:29:19.703 "state": "FREE", 00:29:19.703 "utilization": 0.0 00:29:19.703 } 00:29:19.703 ], 00:29:19.703 "read-only": true 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "name": "verbose_mode", 00:29:19.703 "value": true, 00:29:19.703 "unit": "", 00:29:19.703 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:19.703 }, 00:29:19.703 { 00:29:19.703 "name": "prep_upgrade_on_shutdown", 00:29:19.703 "value": false, 00:29:19.703 "unit": "", 00:29:19.703 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:19.703 } 00:29:19.703 ] 00:29:19.703 } 00:29:19.703 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:19.703 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:19.703 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.960 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:19.960 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:19.960 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:19.960 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.961 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:20.238 Validate MD5 checksum, iteration 1 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.238 08:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.238 [2024-11-20 08:42:07.672856] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:29:20.238 [2024-11-20 08:42:07.672970] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80835 ] 00:29:20.503 [2024-11-20 08:42:07.853654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.503 [2024-11-20 08:42:07.976528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.405  [2024-11-20T08:42:10.247Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-20T08:42:11.623Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:29:24.062 00:29:24.321 08:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:24.321 08:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=92ff8a19bf8c681adbedda5859908408 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 92ff8a19bf8c681adbedda5859908408 != \9\2\f\f\8\a\1\9\b\f\8\c\6\8\1\a\d\b\e\d\d\a\5\8\5\9\9\0\8\4\0\8 ]] 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:26.235 Validate MD5 checksum, iteration 2 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:26.235 08:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:26.235 [2024-11-20 08:42:13.449006] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
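Each validation iteration above copies 1024 MiB out of ftln1 over NVMe/TCP, hashes the resulting file, and compares it against the digest captured for that slice earlier in the test (outside this excerpt); skip then advances by 1024 so the next pass reads the following slice. A sketch of one round, assuming $expected holds the stored digest:

    # Hash the 1 GiB just copied from the FTL bdev and fail on mismatch.
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum != "$expected" ]] && return 1
    skip=$((skip + 1024))   # next iteration: --skip=2048, as traced below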
00:29:26.235 [2024-11-20 08:42:13.449132] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80903 ] 00:29:26.235 [2024-11-20 08:42:13.631177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.235 [2024-11-20 08:42:13.750791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.140  [2024-11-20T08:42:15.960Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-20T08:42:17.358Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:29:29.797 00:29:29.797 08:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:29.797 08:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12c0d0e1e7a0fc9b7bfb962bc5522438 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12c0d0e1e7a0fc9b7bfb962bc5522438 != \1\2\c\0\d\0\e\1\e\7\a\0\f\c\9\b\7\b\f\b\9\6\2\b\c\5\5\2\2\4\3\8 ]] 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80753 ]] 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80753 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80972 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80972 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # '[' -z 80972 ']' 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@843 -- # local max_retries=100 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
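tcp_target_shutdown_dirty above ends the first target with SIGKILL instead of an orderly stop, so FTL never runs the clean shutdown path that produced the "Set FTL clean state" step earlier, and the superblock stays dirty; the restarted target (pid 80972, traced below) therefore has to recover its state on startup. The sequence, as a sketch of what the trace shows:

    # Dirty shutdown: SIGKILL skips FTL's clean shutdown, leaving the
    # superblock dirty so the next startup exercises recovery.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid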
00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@847 -- # xtrace_disable 00:29:31.702 08:42:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.702 [2024-11-20 08:42:19.144068] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:29:31.702 [2024-11-20 08:42:19.144199] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80972 ] 00:29:31.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 837: 80753 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:31.962 [2024-11-20 08:42:19.327275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.962 [2024-11-20 08:42:19.444031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.899 [2024-11-20 08:42:20.397568] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:32.899 [2024-11-20 08:42:20.397634] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:33.159 [2024-11-20 08:42:20.543950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.159 [2024-11-20 08:42:20.544034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:33.159 [2024-11-20 08:42:20.544051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:33.159 [2024-11-20 08:42:20.544062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.159 [2024-11-20 08:42:20.544129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.159 [2024-11-20 08:42:20.544142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:33.159 [2024-11-20 08:42:20.544153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:33.159 [2024-11-20 08:42:20.544163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.159 [2024-11-20 08:42:20.544194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:33.159 [2024-11-20 08:42:20.545207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:33.159 [2024-11-20 08:42:20.545241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.159 [2024-11-20 08:42:20.545253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:33.159 [2024-11-20 08:42:20.545264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.061 ms 00:29:33.159 [2024-11-20 08:42:20.545274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.159 [2024-11-20 08:42:20.545655] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:33.159 [2024-11-20 08:42:20.569463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.159 [2024-11-20 08:42:20.569528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:33.159 [2024-11-20 08:42:20.569546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.844 ms 00:29:33.159 [2024-11-20 08:42:20.569557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.159 [2024-11-20 08:42:20.584607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:33.159 [2024-11-20 08:42:20.584675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:33.159 [2024-11-20 08:42:20.584694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:33.159 [2024-11-20 08:42:20.584705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.585354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.585375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:33.160 [2024-11-20 08:42:20.585388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:29:33.160 [2024-11-20 08:42:20.585399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.585467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.585484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:33.160 [2024-11-20 08:42:20.585495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:33.160 [2024-11-20 08:42:20.585506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.585535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.585546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:33.160 [2024-11-20 08:42:20.585556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:33.160 [2024-11-20 08:42:20.585566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.585596] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:33.160 [2024-11-20 08:42:20.590620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.590658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:33.160 [2024-11-20 08:42:20.590672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.040 ms 00:29:33.160 [2024-11-20 08:42:20.590682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.590724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.590735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:33.160 [2024-11-20 08:42:20.590745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:33.160 [2024-11-20 08:42:20.590756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.590809] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:33.160 [2024-11-20 08:42:20.590832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:33.160 [2024-11-20 08:42:20.590868] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:33.160 [2024-11-20 08:42:20.590890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:33.160 [2024-11-20 08:42:20.590979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:33.160 [2024-11-20 08:42:20.591012] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:33.160 [2024-11-20 08:42:20.591025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:33.160 [2024-11-20 08:42:20.591038] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591062] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:33.160 [2024-11-20 08:42:20.591072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:33.160 [2024-11-20 08:42:20.591083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:33.160 [2024-11-20 08:42:20.591093] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:33.160 [2024-11-20 08:42:20.591104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.591119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:33.160 [2024-11-20 08:42:20.591130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:29:33.160 [2024-11-20 08:42:20.591140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.591215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.160 [2024-11-20 08:42:20.591226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:33.160 [2024-11-20 08:42:20.591237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:33.160 [2024-11-20 08:42:20.591247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.160 [2024-11-20 08:42:20.591340] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:33.160 [2024-11-20 08:42:20.591353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:33.160 [2024-11-20 08:42:20.591367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:33.160 [2024-11-20 08:42:20.591397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:33.160 [2024-11-20 08:42:20.591416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:33.160 [2024-11-20 08:42:20.591425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:33.160 [2024-11-20 08:42:20.591434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:33.160 [2024-11-20 08:42:20.591456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:33.160 [2024-11-20 08:42:20.591465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:33.160 [2024-11-20 08:42:20.591484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:33.160 [2024-11-20 08:42:20.591493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:33.160 [2024-11-20 08:42:20.591511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:33.160 [2024-11-20 08:42:20.591520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:33.160 [2024-11-20 08:42:20.591539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:33.160 [2024-11-20 08:42:20.591578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:33.160 [2024-11-20 08:42:20.591606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:33.160 [2024-11-20 08:42:20.591633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:33.160 [2024-11-20 08:42:20.591661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:33.160 [2024-11-20 08:42:20.591688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:33.160 [2024-11-20 08:42:20.591716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:33.160 [2024-11-20 08:42:20.591743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:33.160 [2024-11-20 08:42:20.591753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:33.160 [2024-11-20 08:42:20.591772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:33.160 [2024-11-20 08:42:20.591781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:33.160 [2024-11-20 08:42:20.591802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:33.160 [2024-11-20 08:42:20.591812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:33.160 [2024-11-20 08:42:20.591821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:33.160 [2024-11-20 08:42:20.591831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:33.160 [2024-11-20 08:42:20.591840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:33.160 [2024-11-20 08:42:20.591849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:33.160 [2024-11-20 08:42:20.591860] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:33.160 [2024-11-20 08:42:20.591872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.160 [2024-11-20 08:42:20.591884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:33.160 [2024-11-20 08:42:20.591894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:33.160 [2024-11-20 08:42:20.591904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:33.160 [2024-11-20 08:42:20.591914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:33.160 [2024-11-20 08:42:20.591925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:33.160 [2024-11-20 08:42:20.591935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:33.160 [2024-11-20 08:42:20.591945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:33.160 [2024-11-20 08:42:20.591955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:33.160 [2024-11-20 08:42:20.591966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.591976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.591997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.592008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.592018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.592029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:33.161 [2024-11-20 08:42:20.592039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:33.161 [2024-11-20 08:42:20.592051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.592062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.161 [2024-11-20 08:42:20.592073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:33.161 [2024-11-20 08:42:20.592084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:33.161 [2024-11-20 08:42:20.592095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:33.161 [2024-11-20 08:42:20.592106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.592121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:33.161 [2024-11-20 08:42:20.592131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.822 ms 00:29:33.161 [2024-11-20 08:42:20.592141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.629231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.629287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:33.161 [2024-11-20 08:42:20.629303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.090 ms 00:29:33.161 [2024-11-20 08:42:20.629314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.629376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.629388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:33.161 [2024-11-20 08:42:20.629400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:33.161 [2024-11-20 08:42:20.629410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.675844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.676113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:33.161 [2024-11-20 08:42:20.676139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.423 ms 00:29:33.161 [2024-11-20 08:42:20.676151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.676214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.676226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:33.161 [2024-11-20 08:42:20.676238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:33.161 [2024-11-20 08:42:20.676248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.676412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.676426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:33.161 [2024-11-20 08:42:20.676437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:29:33.161 [2024-11-20 08:42:20.676448] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.676491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.676503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:33.161 [2024-11-20 08:42:20.676514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:33.161 [2024-11-20 08:42:20.676524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.697268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.697308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:33.161 [2024-11-20 08:42:20.697322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.749 ms 00:29:33.161 [2024-11-20 08:42:20.697350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.161 [2024-11-20 08:42:20.697492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.161 [2024-11-20 08:42:20.697509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:33.161 [2024-11-20 08:42:20.697520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:33.161 [2024-11-20 08:42:20.697531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.421 [2024-11-20 08:42:20.732146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.421 [2024-11-20 08:42:20.732215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:33.421 [2024-11-20 08:42:20.732230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.649 ms 00:29:33.421 [2024-11-20 08:42:20.732257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.421 [2024-11-20 08:42:20.747078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.421 [2024-11-20 08:42:20.747228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:33.422 [2024-11-20 08:42:20.747258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:29:33.422 [2024-11-20 08:42:20.747269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.831538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.831630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:33.422 [2024-11-20 08:42:20.831668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.331 ms 00:29:33.422 [2024-11-20 08:42:20.831679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.831856] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:33.422 [2024-11-20 08:42:20.831972] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:33.422 [2024-11-20 08:42:20.832098] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:33.422 [2024-11-20 08:42:20.832197] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:33.422 [2024-11-20 08:42:20.832211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.832221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:33.422 [2024-11-20 
08:42:20.832233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:29:33.422 [2024-11-20 08:42:20.832244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.832344] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:33.422 [2024-11-20 08:42:20.832358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.832372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:33.422 [2024-11-20 08:42:20.832383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:33.422 [2024-11-20 08:42:20.832393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.854856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.854902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:33.422 [2024-11-20 08:42:20.854916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.470 ms 00:29:33.422 [2024-11-20 08:42:20.854927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.868705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.868743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:33.422 [2024-11-20 08:42:20.868755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:33.422 [2024-11-20 08:42:20.868766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.422 [2024-11-20 08:42:20.868862] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:33.422 [2024-11-20 08:42:20.869073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.422 [2024-11-20 08:42:20.869089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:33.422 [2024-11-20 08:42:20.869101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:29:33.422 [2024-11-20 08:42:20.869112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.991 [2024-11-20 08:42:21.435369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.991 [2024-11-20 08:42:21.435438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:33.991 [2024-11-20 08:42:21.435456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 566.042 ms 00:29:33.991 [2024-11-20 08:42:21.435469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.991 [2024-11-20 08:42:21.441579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.991 [2024-11-20 08:42:21.441753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:33.991 [2024-11-20 08:42:21.441775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.421 ms 00:29:33.991 [2024-11-20 08:42:21.441786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.991 [2024-11-20 08:42:21.442375] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:33.991 [2024-11-20 08:42:21.442405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.991 [2024-11-20 08:42:21.442417] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:33.991 [2024-11-20 08:42:21.442429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:29:33.991 [2024-11-20 08:42:21.442440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.991 [2024-11-20 08:42:21.442577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.991 [2024-11-20 08:42:21.442594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:33.991 [2024-11-20 08:42:21.442606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:33.991 [2024-11-20 08:42:21.442616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.991 [2024-11-20 08:42:21.442660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 574.729 ms, result 0 00:29:33.991 [2024-11-20 08:42:21.442707] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:33.991 [2024-11-20 08:42:21.442780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.991 [2024-11-20 08:42:21.442791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:33.991 [2024-11-20 08:42:21.442801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:29:33.991 [2024-11-20 08:42:21.442811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.631 [2024-11-20 08:42:22.000422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.631 [2024-11-20 08:42:22.000494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:34.631 [2024-11-20 08:42:22.000511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 557.309 ms 00:29:34.631 [2024-11-20 08:42:22.000522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.631 [2024-11-20 08:42:22.006361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.631 [2024-11-20 08:42:22.006403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:34.631 [2024-11-20 08:42:22.006417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.196 ms 00:29:34.631 [2024-11-20 08:42:22.006428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.631 [2024-11-20 08:42:22.006926] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:34.631 [2024-11-20 08:42:22.006947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.631 [2024-11-20 08:42:22.006958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:34.631 [2024-11-20 08:42:22.006970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:29:34.631 [2024-11-20 08:42:22.006980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.007023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.007035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:34.632 [2024-11-20 08:42:22.007045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:34.632 [2024-11-20 08:42:22.007055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 
08:42:22.007094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 565.302 ms, result 0 00:29:34.632 [2024-11-20 08:42:22.007136] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:34.632 [2024-11-20 08:42:22.007148] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:34.632 [2024-11-20 08:42:22.007161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.007171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:34.632 [2024-11-20 08:42:22.007183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1140.165 ms 00:29:34.632 [2024-11-20 08:42:22.007193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.007222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.007234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:34.632 [2024-11-20 08:42:22.007249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:34.632 [2024-11-20 08:42:22.007259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.019604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:34.632 [2024-11-20 08:42:22.019883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.019931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:34.632 [2024-11-20 08:42:22.020037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.626 ms 00:29:34.632 [2024-11-20 08:42:22.020080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.020756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.020879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:34.632 [2024-11-20 08:42:22.020969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:29:34.632 [2024-11-20 08:42:22.021022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.023082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.023206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:34.632 [2024-11-20 08:42:22.023343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.012 ms 00:29:34.632 [2024-11-20 08:42:22.023382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.023451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.023525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:34.632 [2024-11-20 08:42:22.023606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:34.632 [2024-11-20 08:42:22.023643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.023768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.023862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:34.632 
[2024-11-20 08:42:22.023898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:34.632 [2024-11-20 08:42:22.023928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.023974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.024018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:34.632 [2024-11-20 08:42:22.024093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:34.632 [2024-11-20 08:42:22.024128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.024188] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:34.632 [2024-11-20 08:42:22.024225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.024255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:34.632 [2024-11-20 08:42:22.024285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:34.632 [2024-11-20 08:42:22.024416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.024510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.632 [2024-11-20 08:42:22.024545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:34.632 [2024-11-20 08:42:22.024587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:34.632 [2024-11-20 08:42:22.024618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.632 [2024-11-20 08:42:22.025670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1483.679 ms, result 0 00:29:34.632 [2024-11-20 08:42:22.040437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.632 [2024-11-20 08:42:22.056416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:34.632 [2024-11-20 08:42:22.066108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@871 -- # return 0 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.632 Validate MD5 checksum, iteration 1 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.632 08:42:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.632 08:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.905 [2024-11-20 08:42:22.195581] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:29:34.905 [2024-11-20 08:42:22.195909] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81007 ] 00:29:34.905 [2024-11-20 08:42:22.377071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.163 [2024-11-20 08:42:22.495249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.065  [2024-11-20T08:42:24.884Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-20T08:42:29.074Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:29:41.513 00:29:41.513 08:42:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:41.513 08:42:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:42.893 Validate MD5 checksum, iteration 2 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=92ff8a19bf8c681adbedda5859908408 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 92ff8a19bf8c681adbedda5859908408 != \9\2\f\f\8\a\1\9\b\f\8\c\6\8\1\a\d\b\e\d\d\a\5\8\5\9\9\0\8\4\0\8 ]] 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:42.893 08:42:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:42.893 [2024-11-20 08:42:30.171331] Starting SPDK v25.01-pre git sha1 
717acfa62 / DPDK 24.03.0 initialization... 00:29:42.893 [2024-11-20 08:42:30.171691] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81092 ] 00:29:42.893 [2024-11-20 08:42:30.352684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.153 [2024-11-20 08:42:30.474069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.055  [2024-11-20T08:42:32.616Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-20T08:42:35.904Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:29:48.343 00:29:48.343 08:42:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:48.343 08:42:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12c0d0e1e7a0fc9b7bfb962bc5522438 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12c0d0e1e7a0fc9b7bfb962bc5522438 != \1\2\c\0\d\0\e\1\e\7\a\0\f\c\9\b\7\b\f\b\9\6\2\b\c\5\5\2\2\4\3\8 ]] 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:49.723 08:42:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80972 ]] 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80972 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' -z 80972 ']' 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@961 -- # kill -0 80972 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # uname 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80972 00:29:49.723 killing process with pid 80972 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80972' 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@976 -- # kill 80972 00:29:49.723 08:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@981 -- # wait 80972 00:29:51.107 [2024-11-20 08:42:38.247702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:51.107 [2024-11-20 08:42:38.267417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.267463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:51.107 [2024-11-20 08:42:38.267479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:51.107 [2024-11-20 08:42:38.267490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.267512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:51.107 [2024-11-20 08:42:38.271703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.271734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:51.107 [2024-11-20 08:42:38.271747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.183 ms 00:29:51.107 [2024-11-20 08:42:38.271762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.271957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.271970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:51.107 [2024-11-20 08:42:38.271981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.172 ms 00:29:51.107 [2024-11-20 08:42:38.272007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.273026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.273058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:51.107 [2024-11-20 08:42:38.273070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.003 ms 00:29:51.107 [2024-11-20 08:42:38.273080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.274017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.274167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:51.107 [2024-11-20 08:42:38.274186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.901 ms 00:29:51.107 [2024-11-20 08:42:38.274197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.289269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.289417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:51.107 [2024-11-20 08:42:38.289439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.039 ms 00:29:51.107 [2024-11-20 08:42:38.289455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.107 [2024-11-20 08:42:38.297131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.107 [2024-11-20 08:42:38.297167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:51.108 [2024-11-20 08:42:38.297181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.651 ms 00:29:51.108 [2024-11-20 08:42:38.297191] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.297280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.297293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:51.108 [2024-11-20 08:42:38.297304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:51.108 [2024-11-20 08:42:38.297314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.312081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.312237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:51.108 [2024-11-20 08:42:38.312258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.767 ms 00:29:51.108 [2024-11-20 08:42:38.312268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.327321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.327465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:51.108 [2024-11-20 08:42:38.327487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.039 ms 00:29:51.108 [2024-11-20 08:42:38.327497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.342095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.342136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:51.108 [2024-11-20 08:42:38.342150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.542 ms 00:29:51.108 [2024-11-20 08:42:38.342160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.356748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.356786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:51.108 [2024-11-20 08:42:38.356800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.537 ms 00:29:51.108 [2024-11-20 08:42:38.356810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.356844] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:51.108 [2024-11-20 08:42:38.356861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.108 [2024-11-20 08:42:38.356874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.108 [2024-11-20 08:42:38.356885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:51.108 [2024-11-20 08:42:38.356897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 
[2024-11-20 08:42:38.356950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.356983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.108 [2024-11-20 08:42:38.357076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:51.108 [2024-11-20 08:42:38.357086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 18f613ac-8844-4956-8da6-509abc960fbc 00:29:51.108 [2024-11-20 08:42:38.357097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:51.108 [2024-11-20 08:42:38.357127] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:51.108 [2024-11-20 08:42:38.357136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:51.108 [2024-11-20 08:42:38.357147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:51.108 [2024-11-20 08:42:38.357157] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:51.108 [2024-11-20 08:42:38.357168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:51.108 [2024-11-20 08:42:38.357177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:51.108 [2024-11-20 08:42:38.357187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:51.108 [2024-11-20 08:42:38.357196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:51.108 [2024-11-20 08:42:38.357208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.357224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:51.108 [2024-11-20 08:42:38.357235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:29:51.108 [2024-11-20 08:42:38.357245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.377478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.377521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:51.108 [2024-11-20 08:42:38.377535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.233 ms 00:29:51.108 [2024-11-20 08:42:38.377545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
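The shutdown sequence above, like the startup earlier in this log, is reported as repeating trace_step records: an Action marker, the step name, its duration in milliseconds, and a status code. When triaging a slow run it can help to rank those steps by duration. Below is a minimal sketch of such a triage script, assuming the one-record-per-line layout these logs normally use; summarize_ftl_steps.sh is a hypothetical name, not a script shipped in the SPDK tree.

#!/usr/bin/env bash
# summarize_ftl_steps.sh -- hypothetical helper, not part of SPDK.
# Pairs each trace_step "name:" record with the "duration:" record that
# follows it and prints the slowest FTL management steps first.
log="${1:?usage: summarize_ftl_steps.sh <autotest.log>}"

awk '
    /trace_step:.*name: /     { sub(/.*name: /, ""); name = $0 }
    /trace_step:.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                printf "%12.3f ms  %s\n", $0 + 0, name }
' "$log" | sort -rn | head -n 20

Run against this log, it would surface, for example, the 1140.165 ms "Recover open chunks P2L" step from the startup phase near the top of the list.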
00:29:51.108 [2024-11-20 08:42:38.378113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.108 [2024-11-20 08:42:38.378126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:51.108 [2024-11-20 08:42:38.378138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:29:51.108 [2024-11-20 08:42:38.378148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.444129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.108 [2024-11-20 08:42:38.444374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:51.108 [2024-11-20 08:42:38.444400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.108 [2024-11-20 08:42:38.444411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.444475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.108 [2024-11-20 08:42:38.444486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:51.108 [2024-11-20 08:42:38.444497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.108 [2024-11-20 08:42:38.444507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.444614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.108 [2024-11-20 08:42:38.444628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:51.108 [2024-11-20 08:42:38.444639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.108 [2024-11-20 08:42:38.444648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.444668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.108 [2024-11-20 08:42:38.444684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:51.108 [2024-11-20 08:42:38.444694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.108 [2024-11-20 08:42:38.444704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.108 [2024-11-20 08:42:38.568873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.108 [2024-11-20 08:42:38.568936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:51.108 [2024-11-20 08:42:38.568952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.108 [2024-11-20 08:42:38.568963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.367 [2024-11-20 08:42:38.673969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.367 [2024-11-20 08:42:38.674061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:51.367 [2024-11-20 08:42:38.674076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.367 [2024-11-20 08:42:38.674086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.367 [2024-11-20 08:42:38.674195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:51.368 [2024-11-20 08:42:38.674218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674229] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:51.368 [2024-11-20 08:42:38.674314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:51.368 [2024-11-20 08:42:38.674464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:51.368 [2024-11-20 08:42:38.674540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:51.368 [2024-11-20 08:42:38.674612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.368 [2024-11-20 08:42:38.674673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:51.368 [2024-11-20 08:42:38.674688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.368 [2024-11-20 08:42:38.674698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.368 [2024-11-20 08:42:38.674813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 408.026 ms, result 0 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:52.743 Remove shared memory files 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:52.743 08:42:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80753 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:52.743 ************************************ 00:29:52.743 END TEST ftl_upgrade_shutdown 00:29:52.743 ************************************ 00:29:52.743 00:29:52.743 real 1m30.132s 00:29:52.743 user 2m4.267s 00:29:52.743 sys 0m21.684s 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1133 -- # xtrace_disable 00:29:52.743 08:42:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:52.743 Process with pid 73675 is not found 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@14 -- # killprocess 73675 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@957 -- # '[' -z 73675 ']' 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@961 -- # kill -0 73675 00:29:52.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (73675) - No such process 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@984 -- # echo 'Process with pid 73675 is not found' 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81231 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81231 00:29:52.743 08:42:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@838 -- # '[' -z 81231 ']' 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@843 -- # local max_retries=100 00:29:52.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@847 -- # xtrace_disable 00:29:52.743 08:42:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:52.743 [2024-11-20 08:42:40.131292] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:29:52.743 [2024-11-20 08:42:40.131447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81231 ]
00:29:53.001 [2024-11-20 08:42:40.312797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:53.001 [2024-11-20 08:42:40.428610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:53.936 08:42:41 ftl -- common/autotest_common.sh@867 -- # (( i == 0 ))
00:29:53.936 08:42:41 ftl -- common/autotest_common.sh@871 -- # return 0
00:29:53.936 08:42:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:54.195 nvme0n1
00:29:54.195 08:42:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:54.195 08:42:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:54.195 08:42:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:54.477 08:42:41 ftl -- ftl/common.sh@28 -- # stores=7f6ef4d4-fe1a-4704-8579-408f2ddee09f
00:29:54.477 08:42:41 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:54.477 08:42:41 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f6ef4d4-fe1a-4704-8579-408f2ddee09f
00:29:54.735 08:42:42 ftl -- ftl/ftl.sh@23 -- # killprocess 81231
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@957 -- # '[' -z 81231 ']'
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@961 -- # kill -0 81231
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@962 -- # uname
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']'
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 81231
00:29:54.735 killing process with pid 81231
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@963 -- # process_name=reactor_0
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']'
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@975 -- # echo 'killing process with pid 81231'
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@976 -- # kill 81231
00:29:54.735 08:42:42 ftl -- common/autotest_common.sh@981 -- # wait 81231
00:29:57.272 08:42:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:57.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:57.531 Waiting for block devices as requested
00:29:57.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:57.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:57.789 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:57.789 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:30:03.057 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:30:03.057 08:42:50 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:30:03.057 Remove shared memory files
00:30:03.057 08:42:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:03.057 08:42:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:30:03.057 08:42:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:30:03.057 08:42:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:30:03.057 08:42:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:03.057 08:42:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:30:03.057 ************************************
00:30:03.057 END TEST ftl
00:30:03.057 ************************************
00:30:03.057
00:30:03.057 real 10m58.897s
00:30:03.057 user 13m31.774s
00:30:03.057 sys 1m29.216s
00:30:03.057 08:42:50 ftl -- common/autotest_common.sh@1133 -- # xtrace_disable
00:30:03.058 08:42:50 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:03.058 08:42:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:30:03.058 08:42:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:30:03.058 08:42:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:30:03.058 08:42:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:30:03.058 08:42:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:30:03.058 08:42:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:30:03.058 08:42:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:30:03.058 08:42:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:30:03.058 08:42:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:30:03.058 08:42:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:30:03.058 08:42:50 -- common/autotest_common.sh@729 -- # xtrace_disable
00:30:03.058 08:42:50 -- common/autotest_common.sh@10 -- # set +x
00:30:03.058 08:42:50 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:30:03.058 08:42:50 -- common/autotest_common.sh@1384 -- # local autotest_es=0
00:30:03.058 08:42:50 -- common/autotest_common.sh@1385 -- # xtrace_disable
00:30:03.058 08:42:50 -- common/autotest_common.sh@10 -- # set +x
00:30:04.977 INFO: APP EXITING
00:30:04.977 INFO: killing all VMs
00:30:04.977 INFO: killing vhost app
00:30:04.977 INFO: EXIT DONE
00:30:05.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:06.111 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:30:06.111 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:30:06.111 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:30:06.111 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:30:06.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:06.939 Cleaning
00:30:06.939 Removing: /var/run/dpdk/spdk0/config
00:30:06.939 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:06.939 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:06.939 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:06.939 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:06.939 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:06.939 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:06.939 Removing: /var/run/dpdk/spdk0
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57002
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57243
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57484
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57593
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57654
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57783
00:30:06.939 Removing: /var/run/dpdk/spdk_pid57812
00:30:06.939 Removing: /var/run/dpdk/spdk_pid58040
00:30:06.939 Removing: /var/run/dpdk/spdk_pid58163
00:30:06.939 Removing: /var/run/dpdk/spdk_pid58282
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58415
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58535
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58574
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58611
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58693
00:30:07.197 Removing: /var/run/dpdk/spdk_pid58810
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59282
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59364
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59444
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59465
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59627
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59649
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59810
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59832
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59901
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59925
00:30:07.197 Removing: /var/run/dpdk/spdk_pid59994
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60018
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60223
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60261
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60356
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60562
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60668
00:30:07.197 Removing: /var/run/dpdk/spdk_pid60710
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61182
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61281
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61396
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61449
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61480
00:30:07.197 Removing: /var/run/dpdk/spdk_pid61570
00:30:07.198 Removing: /var/run/dpdk/spdk_pid62217
00:30:07.198 Removing: /var/run/dpdk/spdk_pid62260
00:30:07.198 Removing: /var/run/dpdk/spdk_pid62759
00:30:07.198 Removing: /var/run/dpdk/spdk_pid62863
00:30:07.198 Removing: /var/run/dpdk/spdk_pid62984
00:30:07.198 Removing: /var/run/dpdk/spdk_pid63037
00:30:07.198 Removing: /var/run/dpdk/spdk_pid63068
00:30:07.198 Removing: /var/run/dpdk/spdk_pid63093
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65009
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65155
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65164
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65176
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65223
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65227
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65239
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65289
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65293
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65305
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65350
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65358
00:30:07.198 Removing: /var/run/dpdk/spdk_pid65371
00:30:07.198 Removing: /var/run/dpdk/spdk_pid66796
00:30:07.198 Removing: /var/run/dpdk/spdk_pid66917
00:30:07.198 Removing: /var/run/dpdk/spdk_pid68358
00:30:07.198 Removing: /var/run/dpdk/spdk_pid69745
00:30:07.198 Removing: /var/run/dpdk/spdk_pid69860
00:30:07.198 Removing: /var/run/dpdk/spdk_pid69965
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70078
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70206
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70286
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70445
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70821
00:30:07.198 Removing: /var/run/dpdk/spdk_pid70863
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71313
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71499
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71603
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71718
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71776
00:30:07.198 Removing: /var/run/dpdk/spdk_pid71803
00:30:07.198 Removing: /var/run/dpdk/spdk_pid72109
00:30:07.198 Removing: /var/run/dpdk/spdk_pid72175
00:30:07.456 Removing: /var/run/dpdk/spdk_pid72266
00:30:07.456 Removing: /var/run/dpdk/spdk_pid72708
00:30:07.456 Removing: /var/run/dpdk/spdk_pid72856
00:30:07.456 Removing: /var/run/dpdk/spdk_pid73675
00:30:07.456 Removing: /var/run/dpdk/spdk_pid73835
00:30:07.456 Removing: /var/run/dpdk/spdk_pid74055
00:30:07.456 Removing: /var/run/dpdk/spdk_pid74174
00:30:07.456 Removing: /var/run/dpdk/spdk_pid74499
00:30:07.456 Removing: /var/run/dpdk/spdk_pid74780
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75148
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75366
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75515
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75579
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75728
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75765
00:30:07.456 Removing: /var/run/dpdk/spdk_pid75830
00:30:07.456 Removing: /var/run/dpdk/spdk_pid76051
00:30:07.456 Removing: /var/run/dpdk/spdk_pid76298
00:30:07.456 Removing: /var/run/dpdk/spdk_pid76758
00:30:07.456 Removing: /var/run/dpdk/spdk_pid77171
00:30:07.456 Removing: /var/run/dpdk/spdk_pid77556
00:30:07.456 Removing: /var/run/dpdk/spdk_pid78010
00:30:07.456 Removing: /var/run/dpdk/spdk_pid78163
00:30:07.456 Removing: /var/run/dpdk/spdk_pid78263
00:30:07.456 Removing: /var/run/dpdk/spdk_pid78833
00:30:07.456 Removing: /var/run/dpdk/spdk_pid78912
00:30:07.456 Removing: /var/run/dpdk/spdk_pid79328
00:30:07.456 Removing: /var/run/dpdk/spdk_pid79687
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80179
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80301
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80365
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80431
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80488
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80556
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80753
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80835
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80903
00:30:07.456 Removing: /var/run/dpdk/spdk_pid80972
00:30:07.456 Removing: /var/run/dpdk/spdk_pid81007
00:30:07.456 Removing: /var/run/dpdk/spdk_pid81092
00:30:07.457 Removing: /var/run/dpdk/spdk_pid81231
00:30:07.457 Clean
00:30:07.457 08:42:54 -- common/autotest_common.sh@1441 -- # return 0
00:30:07.457 08:42:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:30:07.457 08:42:54 -- common/autotest_common.sh@735 -- # xtrace_disable
00:30:07.457 08:42:54 -- common/autotest_common.sh@10 -- # set +x
00:30:07.715 08:42:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:30:07.715 08:42:55 -- common/autotest_common.sh@735 -- # xtrace_disable
00:30:07.715 08:42:55 -- common/autotest_common.sh@10 -- # set +x
00:30:07.715 08:42:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:07.715 08:42:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:30:07.715 08:42:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:30:07.715 08:42:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:30:07.715 08:42:55 -- spdk/autotest.sh@398 -- # hostname
00:30:07.715 08:42:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:30:07.988 geninfo: WARNING: invalid characters removed from testname!
00:30:34.564 08:43:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:36.468 08:43:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:38.373 08:43:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:40.907 08:43:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:42.836 08:43:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:44.784 08:43:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:47.316 08:43:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:47.316 08:43:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:30:47.316 08:43:34 -- common/autotest_common.sh@741 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:30:47.316 08:43:34 -- common/autotest_common.sh@743 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:47.316 08:43:34 -- common/autotest_common.sh@744 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:47.316 08:43:34 -- common/autotest_common.sh@747 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:47.316 + [[ -n 5260 ]]
00:30:47.316 + sudo kill 5260
00:30:47.325 [Pipeline] }
00:30:47.342 [Pipeline] // timeout
00:30:47.349 [Pipeline] }
00:30:47.366 [Pipeline] // stage
00:30:47.371 [Pipeline] }
00:30:47.386 [Pipeline] // catchError
00:30:47.396 [Pipeline] stage
00:30:47.398 [Pipeline] { (Stop VM)
00:30:47.411 [Pipeline] sh
00:30:47.693 + vagrant halt
00:30:51.135 ==> default: Halting domain...
00:30:57.713 [Pipeline] sh
00:30:58.026 + vagrant destroy -f
00:31:01.314 ==> default: Removing domain...
00:31:01.586 [Pipeline] sh
00:31:01.864 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:31:01.871 [Pipeline] }
00:31:01.883 [Pipeline] // stage
00:31:01.888 [Pipeline] }
00:31:01.900 [Pipeline] // dir
00:31:01.904 [Pipeline] }
00:31:01.916 [Pipeline] // wrap
00:31:01.922 [Pipeline] }
00:31:01.934 [Pipeline] // catchError
00:31:01.943 [Pipeline] stage
00:31:01.945 [Pipeline] { (Epilogue)
00:31:01.957 [Pipeline] sh
00:31:02.239 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:08.812 [Pipeline] catchError
00:31:08.813 [Pipeline] {
00:31:08.825 [Pipeline] sh
00:31:09.106 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:09.107 Artifacts sizes are good
00:31:09.376 [Pipeline] }
00:31:09.391 [Pipeline] // catchError
00:31:09.402 [Pipeline] archiveArtifacts
00:31:09.411 Archiving artifacts
00:31:09.529 [Pipeline] cleanWs
00:31:09.540 [WS-CLEANUP] Deleting project workspace...
00:31:09.540 [WS-CLEANUP] Deferred wipeout is used...
00:31:09.547 [WS-CLEANUP] done
00:31:09.549 [Pipeline] }
00:31:09.565 [Pipeline] // stage
00:31:09.570 [Pipeline] }
00:31:09.584 [Pipeline] // node
00:31:09.590 [Pipeline] End of Pipeline
00:31:09.638 Finished: SUCCESS